| Commit message | Author | Age | Files | Lines |
This reduces JSON page sizes by a significant amount, as one tab is one
character while four spaces are four characters.
Bug: T326065
Depends-On: I67c87484ea4ec23f703480c8d423b800c74f6518
Change-Id: I94e9580d48066b011d33d18751969c43799aa76a
(cherry picked from commit 5d82c7b061af0482bccf42b77c0f470c2da41f57)
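The size difference is easy to demonstrate. Below is a minimal Python sketch (MediaWiki's actual JSON formatting code is PHP, and the sample data is made up) showing that a tab costs one byte per indentation level where four spaces cost four:

```python
import json

# Hypothetical nested data, standing in for a JSON content page.
data = {"colors": {"primary": "#36c", "accent": ["#fc3", "#d33"]}}

tabs = json.dumps(data, indent="\t")   # one byte per indent level
spaces = json.dumps(data, indent=4)    # four bytes per indent level

# The deeper the nesting, the larger the saving from tab indentation.
print(len(tabs), len(spaces))
```

For deeply nested configuration pages the cumulative saving across all lines is what makes the change worthwhile.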
Implementations of ContentHandler::fillParserOutput MUST set HTML if
$cpoParams->getGenerateHtml() returns true.
Bug: T321319
Change-Id: Ibd43f7420e949666649752dce7072dc35bc1f440
(cherry picked from commit 80d8af47f7dc64d6615f813dc3d69230230d0580)
Content::getNativeData() is being deprecated, but a concrete
implementation is needed in AbstractContent to remove it in subclasses.
Bug: T155582
Change-Id: I8232daa71b732918d383259a13c891715de46b20
When invalid JSON is being saved, change the error message from the
generic "invalid-content-data" to "invalid-json-data", with the specific
error passed as a parameter.
Allow extensions to hook into JSON validation, enabling them to apply
additional validations for specific JSON files such as MediaWiki:*.json
config files. The page identity is passed to the hook.
Bug: T313254
Change-Id: If9590c29ed0b871b03a3db8f13e72ee9cfdd7e2b
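The shape of such a hook-based validator can be sketched in Python. This is an illustrative model only, not MediaWiki's PHP hook system; `json_validation_hooks`, `validate_json`, and `require_object` are all hypothetical names:

```python
import json

# Hypothetical hook registry: extensions append callables that receive
# the decoded data and the page title, returning a list of error strings.
json_validation_hooks = []

def validate_json(text, page_title):
    """Validate JSON text, returning specific errors instead of a
    generic 'invalid content data' failure."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError as e:
        # Pass the specific parse error along as a parameter.
        return [f"invalid-json-data: {e.msg} at line {e.lineno}"]
    errors = []
    for hook in json_validation_hooks:
        errors.extend(hook(data, page_title))
    return errors

# An extension-style hook applying an extra rule to config pages.
def require_object(data, page_title):
    if page_title.startswith("MediaWiki:") and not isinstance(data, dict):
        return ["config page must contain a JSON object"]
    return []

json_validation_hooks.append(require_object)
```

The key design point mirrored here is that the page identity is passed to each hook, so extensions can restrict extra validation to specific pages such as MediaWiki:*.json config files.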
Make the behavior consistent between parse API and common preview.
Bug: T184466
Change-Id: I4e701379a8a94a1874dc95ea0e677e5f00c37d84
effects"
In the JavaScript and CSS content handlers we render the page "as
wikitext" solely to generate categories, toc, etc, and then throw that
output away and replace the generated HTML. Simplify the code paths
and the caching by using the canonical options which don't split by
user language, etc.
Three minor issues with the current patch, which can hopefully be
addressed in follow ups:
1. WikiPage::makeParserOptionsFromTitleAndModel() has a very cumbersome
name and arguably doesn't belong in WikiPage in the first place.
T313455 already exists to find a better place for this/way to do this.
2. Title::isConversionTable() requires a downcast of the page reference
to a full title object. This method also probably wants to live
somewhere else.
3. It really would be nice to combine this more properly with
ContentHandler::getParserOutputForIndexing(), but that method
uses a ParserOutputAccess object which requires a PageRecord,
and we don't have a PageRecord available in fillParserOutput().
Bug: T307691
Change-Id: I081105741b507ed49e19cb878550ba4293e09413
This ensures we don't show ToCs in vector-2022 for JavaScript pages
which contain raw <h2> elements.
This version uses the same "canonical options" hack that is used for
language conversion tables. These should both be replaced by a
(not yet existing) mechanism in the future: T313455 is the
task for that.
Bug: T307691
Depends-On: I35e199cca40c0e4359ac493e5806dcf4ae49321c
Change-Id: Iba6a8b6c59bf91e3d06896f0a610c3c3e52e6564
The old implementation could potentially truncate the string in the
middle of a multi-byte character. For compatibility with the old code,
leave out truncateForDatabase()’s default ellipsis, even though some
other content types use it in their getTextForSummary().
Change-Id: I89a2d9a9e31806e70ff1ba351510c1704ce0685d
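The safe-truncation idea can be sketched in Python (illustrative only; MediaWiki's truncateForDatabase() is PHP and also handles an optional ellipsis). The trick is to slice the encoded bytes and then drop any trailing partial UTF-8 sequence when decoding:

```python
def truncate_bytes(text, max_bytes):
    """Trim text to at most max_bytes of UTF-8 without splitting a
    multi-byte character at the cut point."""
    raw = text.encode("utf-8")
    if len(raw) <= max_bytes:
        return text
    # errors="ignore" silently discards a trailing incomplete sequence,
    # which is exactly the mid-character fragment we must not keep.
    return raw[:max_bytes].decode("utf-8", errors="ignore")
```

A naive `raw[:max_bytes].decode("utf-8")` would raise (or, with byte-level string slicing in other languages, emit mojibake) whenever the cut lands inside a character such as the two-byte "é".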
The length should be in bytes, not characters: this seems clear from the
fact that the method is implemented using truncateForDatabase() or
substr(), not truncateForVisual() or mb_substr(), and that it is called
with max lengths based on a constant maximum length minus the strlen(),
not mb_strlen(), of other parts (see ContentHandler::getAutosummary()).
Change-Id: I57efa9a29742b3e5497b9d38cd90a6c6a5e7b582
WikitextContent methods replaceSection() (used when adding a section
to an existing page) and addSectionHeader() (used when creating a new
page) behaved inconsistently - the former would omit the heading
syntax when the section title was empty, but the latter would not.
They both omit it now.
Some callers of addSectionHeader() handled this case, but others did
not, which caused T311489. (I am removing the checks now, since the
change makes them redundant.)
Bug: T311489
Change-Id: Icd59bcf2b75bf50865f19fac92bddabe0c183dcc
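The now-consistent behaviour can be sketched in Python (hypothetical function name; the real method is PHP and works on full wikitext):

```python
def add_section_header(title, body):
    """Prepend a '== title ==' wikitext heading to a new section,
    omitting the heading entirely when the title is empty."""
    if title == "":
        # Matches replaceSection(): no dangling '==  ==' heading.
        return body
    return f"== {title} ==\n\n{body}"
```

With both code paths omitting the heading for an empty title, callers no longer need their own empty-title checks, which is why the redundant checks were removed.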
So, type-hint setComment() and then fix the tests so they always pass a
summary. This also introduces an API change.
Bug: T311567
Change-Id: I478e62b6753d6009017ab0acc295e6cee7d3b017
Enable it for JSON content.
Bug: T300644
Change-Id: Ia5c491cd856ca395fb431bcefd63026084b01a99
Don't call MediaWikiServices::getParser() from ContentHandler.
Always use ParserFactory::getInstance().
Bug: T310948
Change-Id: I5fcdc28111e0c5c7d4a76e69b3978402433ebad9
Following up on the comment I made at Ibbc1423166f4804a5122, make Parser
instance management a ParserFactory responsibility. It is weird for
Parser to have a ParserFactory proxy aspect.
* Add ParserFactory::getMainInstance(), which is equivalent to the old
MediaWikiServices::getParser() and $wgParser.
* Add ParserFactory::getInstance(), which is equivalent to
$wgParser->getFreshInstance(), returning the main instance if it is
free, or a new instance otherwise. The naming is supposed to encourage
it as the default way to get a parser, which will help with the linked
bug.
* Deprecate Parser::getFreshParser() and migrate all core callers.
I left the entry in ServiceWiring.php so that it's not immediately
necessary to migrate ObjectFactory specs that ask for Parser.
Bug: T310948
Change-Id: I762b191e978c2d1bbc9f332c9cfa047888ce2e67
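The instance-management policy described above can be sketched in Python (an illustrative model with snake_case names, not the PHP ParserFactory itself):

```python
class Parser:
    def __init__(self):
        self.in_use = False  # set while a parse is in progress


class ParserFactory:
    def __init__(self):
        self._main = Parser()  # the long-lived shared instance

    def get_main_instance(self):
        # Equivalent of the old global parser: always the same object.
        return self._main

    def get_instance(self):
        # Preferred accessor: hand out the main instance when it is
        # free, otherwise a fresh parser, so a parse started from
        # inside another parse never clobbers shared state.
        if not self._main.in_use:
            return self._main
        return Parser()
```

This captures why get_instance() is the encouraged default: callers get reuse in the common case and safety in the reentrant case without choosing between them.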
Remove dead code and fix typos. Should cause no change in behavior.
Change-Id: I5d293b842bc93a28b8bcd799a31b5e6e30fe692e
It has been deprecated since 1.35 and it is unused.
Change-Id: I3a9e207d1e4cd8d2d3386934926299427d11785f
This is a quick find & replace of calls to the deprecated method
ParserOptions::newCanonical() when the context is the string literal
'canonical'. Such calls can safely be replaced by calling newFromAnon().
Change-Id: If7bb68459b11e0c5f5de188f10fdae85ad1a78bf
Bug: T286139
Change-Id: I17b144f62a3e8f42d221d6476651dbd4c7554524
Renamed in a67cad6 in 1.36.
The fallback content is designed for internal use only.
Change-Id: I6977a9cf1eab0e701fd6969b1a2b5d1dfcecfdfa
Bug: T303596
Change-Id: Ib3b00a8cfabeb12723ac6a441495d72fd0c0ca92
Found using IntelliJ's "Typo" code inspection.
Change-Id: I746220ebe6e1e39f6cb503390ec9053e6518cf16
Redirect chains have never worked as intended.
Bug: T296430
Change-Id: If0e514c57b8f3d857d956a581f1b549518ecb099
Previously:
* It was unclear that generate-html is an optional optimization.
* Most of MediaWiki core was doing $parserOutput->setText('') if
HTML wasn't generated. However, this is wrong: it causes
$parserOutput->hasText() to return true, and can also cause
cache pollution if a content handler both does that and supports
the parser cache (like MassMessage; see T299896).
* The default value of mText in the constructor was '', and most
of the time MW used that default. This doesn't seem right: if
setText() is never called, the ParserOutput should not be considered
to have text.
* It was impossible to set mText to null, as $parserOutput->setText(null)
was a no-op. The docs implied you were supposed to do this, so it was
very confusing.
This patch clarifies the docs, changes the default value of
ParserOutput::$mText from '' to null, and makes $parserOutput->setText(null)
do what you expect it to. The last two are arguably breaking changes,
although the previous behaviours were unexpected, mostly undocumented
and, based on a code search, do not appear to be relied on.
It seems the main reason this only broke MassMessage is that most
content handlers either don't support generateHtml, or don't
support the parser cache.
Bug: T306591
Change-Id: I49cdf21411c6b02ac9a221a13393bebe17c7871e
Depends-On: I68ad491735b2df13951399312a4f9c37b63a08fa
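The resulting tri-state text model can be sketched in Python (an illustrative stand-in, not the PHP ParserOutput class):

```python
class ParserOutput:
    def __init__(self, text=None):
        # Default is now None, meaning "no text was ever set" --
        # distinct from the empty string, which is set-but-empty markup.
        self._text = text

    def set_text(self, text):
        # Passing None now clears the text instead of being a no-op.
        self._text = text

    def has_text(self):
        return self._text is not None
```

Under the old model the default was '' and set_text(None) did nothing, so has_text() was effectively always true; the sketch shows why the new semantics let callers that skip HTML generation simply leave the text unset.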
Now largely automated:

    VARS=$(grep -o "'[A-Za-z0-9_]*'" includes/MainConfigNames.php | \
        tr "\n" '|' | sed "s/|$/\n/;s/'//g")
    sed -i -E "s/'($VARS)'/MainConfigNames::\1/g" \
        $(grep -ERIl "'($VARS)'" includes/)

Then git add -p with lots of error-prone manual checking. Then
semi-manually add all the necessary "use" lines:

    vim $(grep -L 'use MediaWiki\\MainConfigNames;' \
        $(git diff --cached --name-only --diff-filter=M HEAD^))

I didn't bother fixing lines that were over 100 characters unless they
were over 120 and triggered phpcs.
Bug: T305805
Change-Id: I74e0ab511abecb276717ad4276a124760a268147
This edition brought to you by:

    grep -ERIn $(grep -o "'[A-Za-z0-9_]*'" includes/MainConfigNames.php | \
        tr "\n" '|' | sed 's/|$/\n/') includes/

I only corrected a fraction of the results provided by that command. I'm
submitting the partial patch now so it doesn't bitrot.
Bug: T305805
Change-Id: If1918c0b3d88cdf90403921e4310740e206d6962
* Indicate whether a class is a service (to be found via MediaWikiServices)
or a lower-level class for certain backend logic.
* Indicate how to create / where to get instances of non-service classes,
e.g. point to the relevant service.
* Remove copy-pasta text in file docblock that is unrelated,
and incorporate any relevant text into the class docblock instead.
Change-Id: Ia3b9b8c22da4d7160c5e14ae6a6a7c9dca30e9db
It may be null in some cases.
Bug: T305169
Change-Id: I00bf78e6d46392244cbf95344f782ffe3c55dbb6
This is needed to make sure CirrusSearch doesn't overwhelm the parser
cache. Follows-up I23c053df4c (T302620).
Bug: T285993
Change-Id: Ia5fc3b063c45cb43fdee16f44da2270847773945
Make phan stricter about null types by setting null_casts_as_any_type to
false (the default in mediawiki-phan-config).
Remaining false-positive issues are suppressed.
The suppression and the setting change can only be done together.
Bug: T242536
Bug: T301991
Change-Id: I0f295382b96fb3be8037a01c10487d9d591e7e01
Depends-On: I99c5e5664d2401c36a9890f148eba7c25e6e8324
Depends-On: I48ab818b2965da14af15ef370aa83ad9455badd9
Depends-On: I018371e4b77911e56152ca7b2df734afc73f58a5
Change-Id: I04ebdb52102f6191d49a9cc70b1f98308299e72f
Some functions return null, or a class property is explicitly set to
null, where the function should not accept or return null.
Found by phan strict checks.
Change-Id: Ie50f23249282cdb18caa332f562a3945a58d86ff
Found by phan strict checks
Change-Id: If41d16b473baddd92cc4261cdc2bfbe65fedcb19
This utilizes a class-level cache to avoid duplicate parses.
Bug: T302620
Change-Id: I23c053df4cca5b701d2edafc07c484702f2cc85e
This adds a lot of parser cache entries for pages that possibly won't
ever be visited. It is similar to what the RefreshLinks job does, and
that doesn't save at all.
Bug: T285993
Change-Id: I68c14932d568795ab54074e073eab2a80517ed70
This also marks JsonContentHandler as stable to extend, which
was missing from the parent patch.
Bug: T275976
Change-Id: Ied8c2930017bc9ec28e522a774da1050b2b1ffde
Bug: T275976
Change-Id: Ib567e3cddbed93e41ca2636b39e28b352066af14
In PHP 8.1 the default $flags argument to htmlspecialchars() has changed
from ENT_COMPAT to ENT_QUOTES | ENT_SUBSTITUTE | ENT_HTML401. This
breaks some tests.
I changed all the calls that break unit tests, and some others
based on a quick code review. A lot of callers just use the default for
convenience, and were already over-quoting, so the default should still
be good enough for them.
Change-Id: Ie9fbeae6f0417c6cf29dceaf429243a135f9fecb
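The quoting difference behind the breakage can be illustrated with Python's html.escape, whose quote parameter is loosely analogous (this is only an analogy; the actual change is to PHP's htmlspecialchars() defaults):

```python
from html import escape

s = "a 'quoted' \"value\""

# Old PHP default (ENT_COMPAT): double quotes escaped, single quotes kept.
# html.escape has no exact mode for this, so we model it by hand.
compat = escape(s, quote=False).replace('"', "&quot;")

# New PHP 8.1 default (ENT_QUOTES | ENT_SUBSTITUTE | ENT_HTML401):
# both quote styles are escaped.
quotes = escape(s, quote=True)
```

Any test that compares the escaped output as a literal string breaks when single quotes start being escaped too, which is exactly why the unit tests failed under PHP 8.1.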
We always implicitly converted a string argument to an array anyway; just
ask the caller to do this instead, so that we can have a simpler and
more straightforward method signature which matches the plural form
of the method name.
Part of the ParserOutput API cleanup / Parsoid unification discussed
in T287216.
In a number of places we also rename $out to $parserOutput, to make it
easier for codesearch (and human readers) to distinguish between
ParserOutput and OutputPage methods.
Code search:
https://codesearch.wmcloud.org/deployed/?q=p%28arser%29%3F%28Out%7Cout%29%28put%29%3F-%3EaddModule%28Style%29%3Fs%5C%28&i=nope&files=&excludeFiles=&repos=
https://codesearch.wmcloud.org/deployed/?q=arser-%3EgetOutput%5C%28%5C%29-%3EaddModule%28Style%29%3Fs%5C%28&i=nope&files=&excludeFiles=&repos=
Bug: T296123
Depends-On: Iedea960bd450474966eb60ff8dfbf31c127025b6
Depends-On: I7900c5746a9ea75ce4918ffd97d45128038ab3f0
Depends-On: If29dc1d696b3a4c249fa9b150cedf2a502796ea1
Depends-On: I8f1bc7233a00382123a9b1b0bb549bd4dbc4a095
Depends-On: I52dda72aee6c7784a8961488c437863e31affc17
Depends-On: Ia1dcc86cb64f6aa39c68403d37bd76f970e55b97
Depends-On: Ib89ef9c900514d50173e13ab49d17c312b729900
Depends-On: If54244a0278d532c8553029c487c916068e1300f
Depends-On: I8d9b34f5d1ed5b1534bb29f5cd6edcdc086b71ca
Depends-On: I068f9f8e85e88a5c457d40e6a92f09b7eddd6b81
Depends-On: Iced2fc7b4f3cda5296532f22d233875bbc2f5d1b
Depends-On: If14866f76703aa62d33e197bb18a5eacde7a55c0
Depends-On: I9b7fe5acee73c3a378153c0820b46816164ebf21
Depends-On: I95858c08bce0d90709ac7771a910f73d78cc8be4
Depends-On: If9a70e8f8545d4f9ee3b605ad849dbd7de742fc1
Depends-On: I982c81e1ad73b58a90649648e19501cf9172d493
Depends-On: I53a8fd22b22c93bba703233b62377c49ba9f5562
Depends-On: Ic532bca4348b17882716fcb2ca8656a04766c095
Depends-On: If34330acf97d2c4e357b693b086264a718738fb1
Change-Id: Ie4d6bbe258cc483d5693f7a27dbccb60d8f37e2c
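The stricter plural signature can be sketched in Python (an illustrative stand-in for the PHP OutputPage/ParserOutput methods, with snake_case names):

```python
class OutputPage:
    def __init__(self):
        self.modules = []

    def add_modules(self, modules):
        # The plural name now matches the signature: a list is
        # required, and callers wrap a single module name themselves
        # instead of relying on implicit string-to-array conversion.
        if isinstance(modules, str):
            raise TypeError("add_modules() expects a list of module names")
        self.modules.extend(modules)
```

A caller that previously passed a bare string now writes `out.add_modules(["mediawiki.util"])`, which keeps the method body simple and makes the call sites easier to find with codesearch.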
Make it safe to parse articles while in the parser, by always calling
getFreshParser() from WikitextContentHandler.
I think ideally this should be a ParserFactory responsibility, with
Parser instances stored by ParserFactory instead of directly by
ServiceContainer, but this fixes the bug, follows existing conventions,
and does not reduce performance in the usual case.
Bug: T299149
Change-Id: Ibbc1423166f4804a5122de10293ea26f5704d96d
Bug: T286694
Change-Id: Ifc58dd478592be49dd55caddfc9aeb536da1e1d6
Automatically refactors wg-prefixed globals to use MediaWikiServices
config using Rector. Doesn't include files that set globals or files
that fail CI.
Rector Gist: https://gist.github.com/tchin25/7cc54f6d23aedef010b22e4dfbead228
* This patch uses a modified Rector library for our specific use case;
the rector will have different effects without it.
A writeup for future reference is here: https://meta.wikimedia.org/wiki/User:TChin_(WMF)/Using_Rector_On_MediaWiki
Change-Id: I1a691f01cd82e60bf41207d32501edb4b9835e37