| Commit message | Author | Age | Files | Lines |
Bug: T390735
Change-Id: I0ebec537bb15925e8507ee6934cd4a17973c536a
Rather than reset `Parser::$mStripExtTags` after post-processing, ensure
that it is `false` for any frame expansion done using the legacy parser
kept by Parsoid's DataAccess.
This was causing issues with Extension:Babel, which was invoking
$parser->replaceVariables() recursively when the {{#babel}} parser
function was expanded by Parsoid.
This new behavior is enabled by setting $wgParsoidFragmentSupport to
'v3', in order to allow us to run round-trip testing with this
configuration without disturbing production. The configuration
variable is temporary, and will be cleaned up in
Ib5365c87ab594a2c21a84ec8bc2a64a71799085f.
Bug: T390420
Change-Id: I8f45ea027776c3bb0c9f4468afa00465e41b6dec
Bug: T353458
Change-Id: I3cf44dfe5425f2efb8409c83571c427447b053af
This prepares the way for a revision of this API in the follow-up
patch Ie543457d5a2eba2ef1f1f4b7622531582c48c3e4.
Change-Id: I29b843daeb614d1f48009e1ade93c16fe2f16736
To allow easier processing and modification of Notifications, let's
introduce the idea of a NotificationsEnvelope, which represents a
Notification being sent and a list of recipients.
The middleware approach will allow us to modify Notification
behaviour by letting extensions inject/modify Notifications.
Each Middleware will retrieve the list of Envelopes MediaWiki wants to
send. Middlewares should iterate over the envelopes and decide whether
they want to add/remove/replace Notifications and/or Recipients.
Bug: T387996
Change-Id: Ib3ee35c75b2f4dcfdc516b9259a852dc73c4a778
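The envelope/middleware flow described above can be sketched in a few lines of Python (purely illustrative; the class and function names here are hypothetical stand-ins, not the actual PHP interfaces):

```python
# Sketch: each middleware sees the full list of envelopes and may
# add, remove, or replace notifications and/or recipients before
# anything is actually sent.

from dataclasses import dataclass, field


@dataclass
class Envelope:
    notification: str
    recipients: list = field(default_factory=list)


def run_middleware_chain(envelopes, middlewares):
    # Each middleware receives the current list and returns a new one.
    for middleware in middlewares:
        envelopes = middleware(envelopes)
    return envelopes


# Example middleware: drop envelopes that ended up with no recipients.
def drop_empty(envelopes):
    return [e for e in envelopes if e.recipients]


sent = run_middleware_chain(
    [Envelope("edit-thanks", ["Alice"]), Envelope("mention", [])],
    [drop_empty],
)
print([e.notification for e in sent])  # ['edit-thanks']
```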
Bug: T353458
Change-Id: I95690a312e356c45dbeed607d32fb0e4626690cf
Bug: T353458
Change-Id: I2ae4577de79832b082adca282ff73cfabc8f9392
Why:
- "ingress" is the more meaningful concept when implementing listeners
What:
- Add support for DomainEventSubscribers in extension.json
Bug: T389033
Change-Id: I458bd7cd439a2e3213458d994cf87affa5da966b
Bug: T353458
Change-Id: Ibe1810f1c71316a9124e1dc6ae405097dafd5267
This makes SqlBagOStuff write to n out of m servers instead of sharding
keys across them. It also reads from those servers and, in case of
inconsistency, picks the value with the highest exptime.
This is mostly for mainstash and allows us to provide stronger
consistency guarantees while allowing a section to be depooled and
put into maintenance. It basically implements the logic already used by
NoSQL database systems such as Cassandra (there are two common ways to
resolve conflicts, quorum and timestamp; Cassandra uses quorum while we
use timestamp).
There are some edge cases where it might still pick the wrong value:
- if TTL is set to INDEF
- if the TTL gets shortened for various reasons
- if we go with two clusters: a value is set, one cluster gets depooled,
  a new value is set, the depooled cluster gets pooled and the other
  depooled, and then a read happens
But all of these are extremely rare edge cases and we should be fine.
This also means that if data redundancy is set, locking locks all
sections, and removal requires all sections to allow the unblock;
otherwise, the lock is kept.
Bug: T383327
Change-Id: I80da12396858ee4fc58ae257f6c154b3050df696
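The timestamp-based conflict resolution described above can be sketched roughly as follows (a minimal Python illustration of the idea, not the actual SqlBagOStuff code; the function name and data shape are hypothetical):

```python
# Sketch: read a key from all reachable replicas and, on conflict,
# prefer the value whose expiry time (exptime) is furthest in the
# future -- i.e. the one written most recently, assuming equal TTLs.

def resolve_read(replica_values):
    """replica_values: list of (value, exptime) tuples, one per
    reachable replica; exptime is an absolute UNIX timestamp.
    Entries may be None if a replica had no value."""
    candidates = [v for v in replica_values if v is not None]
    if not candidates:
        return None
    # Highest exptime wins the conflict.
    value, _exptime = max(candidates, key=lambda pair: pair[1])
    return value


# Example: two replicas agree on the new value, one holds a stale one.
print(resolve_read([("new", 1700000300), ("old", 1700000100), ("new", 1700000300)]))
# prints "new"
```

Note how the INDEF-TTL edge case from the commit message falls out of this: if all replicas carry the same (maximal) exptime, the timestamp no longer discriminates between old and new values.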
Bug: T387856
Change-Id: I1e206dc4ad8bca2fae78ca4fdf919e3a8ee3c4b5
With this change, when the editor requests a thumbnail with a width of
200px and we have defined the steps as [...,150px,250px,...], the 250px
thumbnail will be picked, but the width attribute in the HTML will still
be set to force the requested size.
This change will massively reduce the storage of thumbnails, which has
been causing issues for us recently, and improve the cache hit ratio at
every layer (from the client side to the CDN frontend and backend).
Tested locally and it worked just fine.
Bug: T360589
Change-Id: I9110d4ac9bcd421b07f13deeae5d863ef1ef9c31
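The bucketing logic amounts to picking the smallest configured step that is at least the requested width; a rough Python sketch (illustrative only — the step values and function name are hypothetical, not MediaWiki's actual configuration):

```python
# Sketch: snap a requested thumbnail width up to the nearest
# configured step, so that many slightly-different requests share
# one rendered file; the HTML width attribute still carries the
# originally requested size, so pages render at the size asked for.

STEPS = [100, 150, 250, 400, 800]  # hypothetical step configuration


def pick_thumb_width(requested):
    # Smallest step >= requested; clamp to the largest step otherwise.
    for step in STEPS:
        if step >= requested:
            return step
    return STEPS[-1]


print(pick_thumb_width(200))  # a 200px request is served by the 250px file
```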
To respect all genders, fix comments to not assume users use binary
pronouns (or even that users are “he”s), but rather use singular “they”.
Also fix some typos that happened to result in gendered pronouns, and a
few incorrect commas and missing articles in comments near the fixed
pronouns.
I skipped four files:
- HISTORY – the release notes were made with the wording they were made
with, I’m not sure if rewording them afterwards is okay
- tests/phpunit/data/preprocess/All_system_messages.{txt,expected} –
these are test cases generated from somewhere, I’d regenerate them
rather than updating
- languages/i18n/qqq.json – fixed on Translatewiki instead to make their
edit histories more useful
Bug: T387626
Change-Id: I282406a0e1407be548e917735fe7eb9a6bf8b136
Bug: T322944
Change-Id: I4e142ec5eba2dc05afe947f138bea043e0667151
Bug: T322944
Change-Id: I6de31143e67e14d14aeaf7df04f1cbe257cf56bb
The feature is enabled and working in Wikimedia production – I think
that’s good enough to call it properly supported.
Bug: T322944
Change-Id: I8fc310d4ab4fc3e17cbeaadfc8eadb4d2120ebda
Bug: T383501
Change-Id: I9fb473c0ebbc7b002aff513b0630d18d9cbd68d3
Why:
- The TSP team would like to change the way expired temporary account
user links are displayed, which requires an efficient way to fetch
their registration timestamps.
- On WMF wikis, which use CentralAuth, this requires fetching the first
(i.e. global) registration timestamp of the account, rather than the
naïve approach of using the registration timestamp from the local user
table.
- MediaWiki provides the UserRegistrationLookup facade to transparently
fetch the earliest registration timestamp for a single user, but
offers no batch interface to do the same.
- Since user links are often rendered in large pagers, a batch interface
is needed.
What:
- Add IUserRegistrationProvider::fetchRegistrationBatch(), which takes
an iterable of UserIdentities and returns a map of their registration
timestamps (or null if not available), keyed by user ID. Although this
interface is marked as stable to implement, its sole non-core
implementor according to codesearch is CentralAuth.
- Add UserRegistrationLookup::getFirstRegistrationBatch(), which
delegates to fetchRegistrationBatch() on configured registration
providers and returns the earliest registration timestamp for each user
in the batch.
- To avoid potential interface incompatibility in WMF production, this
depends on CentralAuth implementing the new IUserRegistrationProvider
method first.
Bug: T358469
Depends-On: Ibe28163e962161567d486607e36d999a36a1e604
Change-Id: I1f6af2693a8f0c5c854b8a6b04edd1eb21934007
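The batch lookup described above boils down to merging per-provider timestamp maps and keeping the earliest non-null timestamp per user. A hedged Python sketch of that merge (the provider objects and method names here are hypothetical stand-ins, not the actual MediaWiki interfaces):

```python
# Sketch: each provider returns {user_id: timestamp-or-None}; the
# lookup keeps the earliest non-null timestamp per user across all
# configured providers (e.g. the local wiki and CentralAuth).
# MediaWiki-style TS_MW timestamps are fixed-width strings, so
# lexicographic comparison orders them chronologically.

def first_registration_batch(providers, user_ids):
    earliest = {uid: None for uid in user_ids}
    for provider in providers:
        batch = provider.fetch_registration_batch(user_ids)
        for uid, ts in batch.items():
            if ts is None:
                continue
            if earliest[uid] is None or ts < earliest[uid]:
                earliest[uid] = ts
    return earliest


class FakeProvider:
    def __init__(self, data):
        self.data = data

    def fetch_registration_batch(self, user_ids):
        return {uid: self.data.get(uid) for uid in user_ids}


local = FakeProvider({1: "20200101000000", 2: None})
central = FakeProvider({1: "20190101000000", 2: "20210101000000"})
print(first_registration_batch([local, central], [1, 2]))
# {1: '20190101000000', 2: '20210101000000'}
```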
Create a provider pattern for sending notifications (T383992).
There is no base notifications handler in core; they can only
be provided by extensions.
A handler in the Echo extension, compatible with the existing
Echo notifications system, will be implemented in T383993.
Co-Authored-By: Piotr Miazga <pmiazga@wikimedia.org>
Bug: T383992
Change-Id: I16b309935c3d29c3cde4459b5e36abce063a8534
Add a new option to exempt users from autoblocks in the configuration,
instead of editing a MediaWiki space page on every wiki. The use case
for this is WMCS ranges (see T386689).
Bug: T240542
Change-Id: I704b34b81214e7a1ac819fefa7ad3c2c87305647
These are 2D lists, so the default array_merge will overwrite values
instead of merging.
Bug: T386210
Change-Id: Id001462b17ff43964af4f627ca40f07cb198eab2
Also avoid null as an array item,
as that is not allowed according to the return type.
Change-Id: I4083c55a69d6186448a13f35f18d96bfe9ffd23c
RenameUserJob is moved to Job/RenameUserTableJob because there are two kinds of jobs now.
The newly added RenameUserDerivedJob is used for performing user renames
across a wiki family using virtual domains or shared tables. Most code is
moved from SpecialRenameUser and maintenance/renameUser.
A new service, RenameUserFactory, is added to make constructing
RenameUser easier.
When a global rename happens, the central wiki will enqueue
RenameUserDerivedJobs for the other wikis in the same family.
The derived jobs will check whether the central wiki has the same user
table as the local one, and perform updates to local tables.
A new user right, 'renameuser-global', is also added because wiki families
may want global users to be renamed only by a limited set of users or
on a certain global wiki.
Bug: T104830
Change-Id: Ic4120cbd9a4850dfe22d009daa171199fe7c5e39
This was used to test an experimental Parsoid feature before deployment,
and the testing was successful.
Bug: T382464
Follows-Up: I194a9550500bf7ece215791c51d6feb78a80b1a8
Change-Id: Ib91a17868352722dc3570b07856423733f1b2368
If a session provider is safe against CSRF (e.g. OAuth), we can allow
cross-origin requests to be non-anonymous. This makes it possible to
have fully client-side web applications that authenticate users via an
OAuth 2.0 client (necessarily a non-confidential client) and then make
authorized requests against wikis using the Authorization header.
To opt into this new mode of CORS requests, we use a new boolean
parameter called "crossorigin". (An earlier version of this change
reused the existing "origin=*" parameter for this, but the change to its
previous “always anonymous” behavior was not welcomed during code
review.) The parameter is disabled by default via a config setting,
which is currently declared experimental; if this works out in practice,
we’ll presumably want to at least change it to non-experimental, though
I don’t know if we want to enable the feature by default (or even
unconditionally) or keep the setting as it is.
Note that the preflight request doesn’t send the real Authorization
header (it just includes its name in Access-Control-Request-Headers), so
the session provider in the preflight request is still the normal cookie
provider (which is why handleCORS() has to bypass the safeAgainstCsrf()
check in that case). This shouldn’t be an issue, because
executeActionWithErrorHandling() returns quite early if the request is
an OPTIONS request (immediately after handleCORS()), but to be sure that
the unsafe session isn’t used during the preflight request, I added a
"crossorigin" check to lacksSameOriginSecurity(). (That method is called
by the constructor before the param validator has been set up, so
$this->getParameter() is not available – hence the call to
$request->getCheck() instead, just as for the 'callback' parameter.)
Bug: T322944
Change-Id: I41200852ee5d22a36429ffadb049ec3076804c78
* Plumb this value into Parsoid's SiteConfig so that the Parsoid
library code can access this.
Bug: T373253
Bug: T385129
Change-Id: If119ff94e65325fc446ca068e0b2d2434c070a2e
This allows Parsoid to mark parses which contain async content which
is "not ready yet". At the moment this output is cached with a reduced
TTL, although in the future it might still be treated as uncacheable,
cached until evicted, or some other option.
The HAS_ASYNC_CONTENT flag along with ParserOutput::hasReducedExpiry()
ensures that RefreshLinksJob is opportunistically reinvoked whenever
the page is reparsed, since the asynchronous content may change the
metadata for the page when it becomes ready.
As described in T373256, ::hasReducedExpiry() is misnamed now, and a
follow-up patch will probably rename it to ::hasDynamicContent() or
something like that. What it really means is "RefreshLinksJob must
be re-run on every parse, because the content may change on each
parse". In the past we would *also* reduce the cache time for
pages like this. But for asynchronous content, "the content may
change on each parse" only *until* the asynchronous content is
"ready". Once it is ready the contents will no longer change, and
the cache lifetime can be raised again -- but ::hasDynamicContent()
still needs to be set, which in the future will mean "you need to
check that RefreshLinksJob has last run" not "you must always run
RefreshLinksJob".
Asynchronous content will always set HAS_ASYNC_CONTENT, even after
the content is "ready", but will only set ASYNC_NOT_READY if it
needed to use placeholder content in this render.
Bug: T373256
Change-Id: I71e10f8a9133c16ebd9120c23c965b9ff20dabd2
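Under the scheme above, the cache-lifetime decision depends on both flags; a small Python sketch of the intended behaviour (flag names are taken from the commit message, but the function and the TTL values are hypothetical):

```python
# Sketch: async-capable content always sets HAS_ASYNC_CONTENT, but
# only sets ASYNC_NOT_READY while placeholder content was rendered.
# Once the async content is ready, the output can be cached for the
# full lifetime again, while HAS_ASYNC_CONTENT continues to ensure
# RefreshLinksJob is opportunistically re-run on reparse.

FULL_TTL = 86400     # hypothetical full parser-cache TTL (1 day)
REDUCED_TTL = 3600   # hypothetical reduced TTL while not ready


def cache_ttl(has_async_content, async_not_ready):
    if has_async_content and async_not_ready:
        return REDUCED_TTL   # placeholder rendered; recheck soon
    return FULL_TTL          # content ready, or no async content at all

print(cache_ttl(True, True), cache_ttl(True, False))  # 3600 86400
```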
Remove the 'googlesearch' and 'search-external' messages, so that
there is no fallback search form if $wgDisableTextSearch=true and
$wgSearchForwardUrl=false.
Bug: T384678
Change-Id: I20a3fe8484424427de5dcc55098a09114fedaf66
* This has to ride the train with Parsoid's changes
Bug: T382464
Change-Id: I92a81d41d284a9b272d3f0d6cbdc5b022d051f57
Bug: T299951
Change-Id: Ifd9876bcb452e412b7335741e74cfc4c820aa248
* If Parsoid calls the preprocessor, initialize lineStart to true.
Track this through:
- parser function calls that return expandable template messages
(the {{int:}} parser function is an example in core;
extensions seem to define a number of other such parser functions)
- template-arg substitutions,
so {{templatename|mytemplate}} with text {{{{{1}}}}}, which
is effectively a call to {{mytemplate}}, continues to set sol-state
to true across the expansion.
See test "Preprocessor precedence 5: tplarg takes precedence over template"
in preprocessor.txt which exercises this use case.
- However, note that this is a best-effort attempt because this flag is
set while building the preprocessor DOM tree before templates are
expanded. So, this is mostly a source syntax flag, and constructs
that expand to empty strings can blind the preprocessor to the true
value of SOL state in the expanded string. This is true for both
the legacy parser and Parsoid, and as such T2529 behavior is a hack
with a set of associated edge cases.
* Parsoid models templates as independent documents that always start
in start-of-line state (and does some patching up for b/c reasons where
this assumption fails). So, there is no reason to add newlines for
some set of wikitext characters (per T2529) when Parsoid is involved.
* This lets us eliminate some hacks in Parsoid that strip these added newlines
when Parsoid was already in SOL state but which then introduce edge cases.
See the discussion in T382464, where Parsoid currently cannot distinguish
between a couple of test cases.
* With this change, where Parsoid no longer gets a newline added,
Parsoid doesn't have to heuristically remove the newline (and do so
incorrectly, as in the edge case in the bug report), which eliminates
the edge case from the bug report.
* This change has to be backed by a change in Parsoid to undo the T2529
newline-removal hack in TokenStreamPatcher, to ensure Parsoid CI
doesn't break with this change.
* To let us safely test this in Parsoid's round-trip testing and safely
(and conservatively) roll this out to production, this change is
backed by a new config flag (ParsoidTemplateExpansionMode) which
defaults to false.
We unconditionally set this to true in the ParserTestRunner for all
parser tests.
This flag will be removed once we roll out this change and the
Parsoid change to production.
Bug: T382464
Change-Id: I194a9550500bf7ece215791c51d6feb78a80b1a8
Issues spotted while working on I03a9a6945ab27e9888ea21b03985ed713f0a9b50.
Some code style improvements too.
Change-Id: I409d0a1805aa7430cc86e53633f4f85ef8a76dcf
Adds a new tag in core, mw-recreated, to note when a new page is a recreation.
Bug: T56145
Co-Authored-by: Rockingpenny4 <rockingpenny4@gmail.com>
Change-Id: Ib8ffe3fba73d0464f3fd353138456b07e7afc7d7
Make use of a reusable definition of ObjectSpec structures, for
consistency across all parts of extension.json that use object specs.
Change-Id: Ie09b933b9419523cdc62dcba79f86c5bf4242ac3
Service is no longer running, so it's a default that doesn't do anything
Bug: T382987
Change-Id: I3a21c12ba689928d38e410cbe2547ab7e616ac8a
Bug: T368113
Change-Id: I8d98d187ba4f1342167820b5710f5382b2ac4831