Wikipedia:Bot requests/Archive 79
This is an archive of past discussions about Wikipedia:Bot requests. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current main page.
One-off task: subst Template:User Corei7/blue and Template:User Corei7/grey
Userboxes {{User Corei7/blue}} and {{User Corei7/grey}} are now delegating to Template:User Corei7 per WP:WPUBX#Merge. To delete them, their usages need to be substituted. —andrybak (talk) 22:54, 29 June 2019 (UTC)
- Andrybak, put
<noinclude>{{subst only|auto=yes}}</noinclude>
on each UBX and a bot will subst all the uses. Primefac (talk) 01:08, 30 June 2019 (UTC)
- Thank you. —andrybak (talk) 01:10, 30 June 2019 (UTC)
- Done by AnomieBOT. —andrybak (talk) 13:07, 30 June 2019 (UTC)
Finish WikiProject merge by updating talk templates
Hi all. WikiProject Waterfalls was recently merged into WikiProject Rivers as a taskforce. I'm looking for help updating the project talk page templates to reflect the change. Basically I'd like a bot to go through each talk page with {{WikiProject Waterfalls}}
and apply the following logic:
- If the page doesn't have {{WikiProject Rivers}}, remove the WP:Waterfalls template and replace it with WP:Rivers with |waterfalls=yes.
- If the page already has {{WikiProject Rivers}}, remove the WP:Waterfalls template and add |waterfalls=yes to the WP:Rivers template.
Just to give a sense of scale, there are just over 1,000 articles currently tagged with {{WikiProject Waterfalls}}
. Also WikiProject Waterfalls did not categorize articles by quality and importance, so you don't need to worry about those when swapping the tags. Any help would be much appreciated!! Thanks! Ajpolino (talk) 22:31, 28 June 2019 (UTC)
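For anyone picking this up, the swap could be sketched roughly as follows with pywikibot and mwparserfromhell; the generator, namespace filter, and edit summary are illustrative assumptions, not the code the approved bot actually ran:

```python
# Sketch only: banner swap with pywikibot + mwparserfromhell.
import pywikibot
import mwparserfromhell

site = pywikibot.Site("en", "wikipedia")
banner = pywikibot.Page(site, "Template:WikiProject Waterfalls")

for talk in banner.getReferences(only_template_inclusion=True, namespaces=[1]):
    code = mwparserfromhell.parse(talk.text)
    rivers = waterfalls = None
    for tpl in code.filter_templates():
        name = str(tpl.name).strip().lower()
        if name == "wikiproject rivers":
            rivers = tpl
        elif name == "wikiproject waterfalls":
            waterfalls = tpl
    if waterfalls is None:
        continue
    if rivers is None:
        waterfalls.name = "WikiProject Rivers"  # replace the banner in place
        waterfalls.add("waterfalls", "yes")
    else:
        rivers.add("waterfalls", "yes")         # flag the existing banner
        code.remove(waterfalls)                 # and drop the old one
    talk.text = str(code)
    talk.save(summary="WikiProject Waterfalls merged into WikiProject Rivers")
```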
- BRFA filed, see Wikipedia:Bots/Requests for approval/DannyS712 bot 51 --DannyS712 (talk) 05:20, 29 June 2019 (UTC)
- Done (Forgot to post when I did it.) — JJMC89 (T·C) 02:28, 30 June 2019 (UTC)
Check if certain links work
In User:Headbomb/Sandbox, if someone could build a script checking if the various {{hdl}} links work, and mark them as such, that would be great.
For example, hdl:2152/4115 works, but hdl:99999/1062 doesn't.
Headbomb {t · c · p · b} 18:50, 27 June 2019 (UTC)
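For what it's worth, the check can be made against the Handle.Net proxy's REST endpoint; a minimal sketch (not Betacommand's script, and the endpoint and field names should be verified against the Handle System documentation):

```python
# Rough check of {{hdl}} targets via the Handle.Net proxy's REST API;
# a responseCode of 1 means the handle resolves.
import requests

def hdl_works(handle: str) -> bool:
    r = requests.get(f"https://hdl.handle.net/api/handles/{handle}", timeout=30)
    if r.status_code != 200:
        return False
    return r.json().get("responseCode") == 1

for handle in ("2152/4115", "99999/1062"):
    print(handle, "works" if hdl_works(handle) else "broken")
```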
- Betacommand (talk · contribs) has said (on IRC) they'd take care of this by the weekend, btw. So feel free to take other requests instead. Headbomb {t · c · p · b} 22:08, 27 June 2019 (UTC)
Bot that will automatically delete pages tagged with db-u1
The bot should not perform the deletion until the tag has been there for 5-10 minutes, to minimize the number of users having to go to WP:REFUND because the bot deleted it after they decided they didn't want the page deleted. InvalidOS (talk) 16:05, 24 May 2019 (UTC)
- @InvalidOS: it would also need to check that it was added by the actual user and that there are no other significant contributors. Since "significant" is subjective (and thus a context-bot), this would likely require checking that there are no other contributors at all. --DannyS712 (talk) 16:20, 24 May 2019 (UTC)
- I was thinking to check if the page creator is the one who placed the tag. That works too, though. InvalidOS (talk) 16:26, 24 May 2019 (UTC)
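A rough sketch of the safety checks being discussed (illustrative pywikibot code; the deletion itself would need an admin bot, which is the crux of the objections below):

```python
# Sketch of the pre-deletion checks discussed above; illustrative only.
import re
from datetime import timedelta

import pywikibot

site = pywikibot.Site("en", "wikipedia")

def u1_candidate(page: pywikibot.Page) -> bool:
    # "User:Example/Sandbox" -> "Example"
    owner = page.title().partition(":")[2].partition("/")[0]
    revs = list(page.revisions())            # newest first
    # only the userspace owner may have edited the page
    if any(rev.user != owner for rev in revs):
        return False
    # nothing moved in from another title (see the move-summary point below)
    if any(re.search(r"moved page \[\[.*?\]\] to \[\[", rev.comment or "")
           for rev in revs):
        return False
    # the {{db-u1}} tag must have sat for at least ten minutes
    if "db-u1" not in page.text:             # naive; redirects to the tag ignored
        return False
    return site.server_time() - revs[0].timestamp >= timedelta(minutes=10)
```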
- I'm not sure if this is a serious proposal, but speedy deletions should be reviewed by an admin, and not automatically deleted after 5 minutes by a bot. Making a change like this would require a very broad consultation, which doesn't appear to have happened here, as this opens up the U1 criteria to a lot of potential misuse. Furthermore, any bot that automatically deletes pages would have to be operated by an administrator, who would then be responsible for every edit the bot performs per WP:ADMINACCT. Unless there is an admin who believes this is a good idea and is willing to take this on, this proposal is a waste of time. – bradv🍁 17:09, 24 May 2019 (UTC)
- This suggestion comes up fairly regularly. So far it has always turned out that the admins processing U1 deletion requests don't feel a need for bot help to keep on top of things. Anomie⚔ 18:55, 24 May 2019 (UTC)
- Maybe because one or two of our admins seem to work much like bots do... --Redrose64 🌹 (talk) 19:31, 24 May 2019 (UTC)
- +1. I know this is a rather perennial proposal, but I don't see why it is necessary to make human admins to do work that is fit for bots. I don't think there would be any problems in having a bot delete U1-tagged pages if the following conditions are satisfied: (1) The page is in User:Example's userspace and has not been edited by anyone apart from User:Example. (2) The page was not moved in from another page, that is, there are no edit summaries that begin with
Example moved page [[.*?]] to [[thisPageName]]
. SD0001 (talk) 04:15, 25 May 2019 (UTC)
- I'm with Anomie on this one - how often do we really have a "U1 backlog"? I'd like to see stats before we do any sort of pre-planning for a likely-controversial and possibly false-positive-ridden bot that deletes pages; if there are only a half-dozen U1 requests a day, an admin can easily take care of them. I know I personally haven't seen more than about a dozen at the most at any given time. Primefac (talk) 12:43, 25 May 2019 (UTC)
- Is it necessary to have a backlog to automate work? SD0001 (talk) 20:58, 25 May 2019 (UTC)
- No, but we only set up automated tasks when there is a demonstrable need for the automation. --Redrose64 🌹 (talk) 21:37, 25 May 2019 (UTC)
A bot to rename 474 pages
Please, a bot is needed to move 474 pages listed here. I am procedurally moving this request from WP:RM/TR to here; see the discussion leading to this. User @Taketa: made the request; I will notify them about moving the request here so they can explain further if anything is not clear. Please direct any questions to them. Thanks. – Ammarpad (talk) 03:33, 1 July 2019 (UTC)
- If a bot is going to do this, it would be nice to have the bot add {{R to monotypic taxon}} to the binomial titles after the moves are made (at which point the binomials will be redirects), and to add Category:Monotypic beetle genera to the genus titles. Plantdrew (talk) 04:20, 1 July 2019 (UTC)
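A sketch of what such a run might look like (pywikibot; the input format, skip logic, and edit summaries are assumptions, and the actual run described below may have differed):

```python
# Sketch only: move each binomial to its genus title, tag the leftover
# redirect, and categorize the genus article.
import pywikibot

site = pywikibot.Site("en", "wikipedia")
# assumed input: one "binomial<TAB>genus" pair per line
pairs = [line.strip().split("\t") for line in open("moves.tsv")]

for source, target in pairs:
    old = pywikibot.Page(site, source)
    if pywikibot.Page(site, target).exists():
        continue                              # occupied target: leave for manual review
    old.move(target, reason="Move monotypic species to genus title")
    redirect = pywikibot.Page(site, source)   # the move leaves a redirect behind
    redirect.text += "\n{{R to monotypic taxon}}"
    redirect.save(summary="Tag redirect with {{R to monotypic taxon}}")
    genus = pywikibot.Page(site, target)
    genus.text += "\n[[Category:Monotypic beetle genera]]"
    genus.save(summary="Add to Category:Monotypic beetle genera")
```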
- I did three to see how complicated it is. I think I can knock this out in a day or two with a properly configured AWB. bd2412 T 04:40, 1 July 2019 (UTC)
- Thank you @Ammarpad: for posting this request and thank you @BD2412: for picking it up. If you have any questions let me know. All the best, Taketa (talk) 04:49, 1 July 2019 (UTC)
- I will start configuring tomorrow. The task will include Plantdrew's requests to template the redirect and add the category to the moved articles. bd2412 T 04:50, 1 July 2019 (UTC)
- I had already started on this before seeing BD2412's post. The moves will be done shortly. I was planning on adding the rcat and category too, but someone else can handle that if they like. — JJMC89 (T·C) 05:37, 1 July 2019 (UTC)
- There are some where another article or disambiguation page is at the target. I've left those alone, but the rest of the moves are done. I'm adding
{{R to monotypic taxon}}
now. — JJMC89 (T·C) 05:53, 1 July 2019 (UTC)
- Done Taketa, the ones that didn't get moved need review. BD2412, sorry for eating the cookie that you licked. — JJMC89 (T·C) 06:25, 1 July 2019 (UTC)
- @JJMC89: Thank you. Do you have an overview of articles that did not get moved? Sincerely, Taketa (talk) 06:33, 1 July 2019 (UTC)
- @Taketa: Priscilla hypsiomoides, Cauca contestata, Catuaba sanguinoloenta, Alluaudia insignis, Ludwigia lixoides, Isse punctata, and Antennexocentrus collarti — JJMC89 (T·C) 06:37, 1 July 2019 (UTC)
- I have had a look. They should all keep the name they have. That last one was a mistake by me, it is not a monotypic genus. Thanks everyone for the renames. I will probably be back in 2-4 weeks with a new list. All the best, Taketa (talk) 06:46, 1 July 2019 (UTC)
Remove top level user pages from templates categories
Please remove user pages (only top level, i.e. without slashes) from Category:WikiProject user templates, like in Special:Diff/901573700. It would be nice if the same could be done for the whole tree of Category:Userboxes, like three templates in Special:Diff/901574283. Cleaning up the surrounding <noinclude></noinclude> if only whitespace is left inside would be great. —andrybak (talk) 20:23, 12 June 2019 (UTC)
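The text transformation itself is simple; a sketch of the two regexes involved (illustrative, not the exact AWB find-and-replace rules that were used):

```python
# Sketch of the cleanup regexes; illustrative only.
import re

CAT = r"\[\[\s*[Cc]ategory\s*:\s*WikiProject user templates\s*(\|[^\]]*)?\]\]\n?"

def clean(text: str) -> str:
    text = re.sub(CAT, "", text)
    # drop <noinclude> wrappers left holding only whitespace
    return re.sub(r"<noinclude>\s*</noinclude>", "", text)
```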
- @Andrybak: Doing... --DannyS712 (talk) 20:35, 12 June 2019 (UTC)
- @Andrybak: Done manually using AWB --DannyS712 (talk) 20:54, 12 June 2019 (UTC)
- I think that these were cases where newbies didn't understand either transclusion or substitution, and copypasted the contents of a template's source to their user pages. --Redrose64 🌹 (talk) 23:00, 12 June 2019 (UTC)
- I agree that that appears to have been what happened. --TheSandDoctor Talk 05:59, 17 June 2019 (UTC)
- Indeed. In documentation of some userboxes (and categories) there are even warnings about accidentally categorizing your user page as a template. —andrybak (talk) 01:58, 19 June 2019 (UTC)
WikiProject tagging
Could someone help with tagging categories and pages with the {{WikiProject Television}} banner? Ideally all categories and pages under Category:Television and Category:WikiProject Television except the ones listed below. Pages in the following categories should not be tagged (but the categories themselves should):
- Category:Film and television podcasts
- Category:Television people - and all its sub-categories. To make sure, any category with the word "people" in it should be excluded.
- Category:Women in television
- Category:Television videos and DVDs
- Category:Quotations from film and television - except for the sub-categories Category:Saturday Night Live catchphrases and Category:Star Trek sayings
- Category:Television wrestling championships
- Category:Works about television - and all its sub-categories
- Category:WikiProject Television participants
- Any User: namespace page
Note that some of the sub-categories are also placed in other category trees. There might be some false positives in the list, but those can be fixed manually when found. Currently a lot of categories and pages are missing the tag, which means they don't show up in the project alert section. Also note that some might have a redirect template, so if possible don't add two templates to the same page (ideally the redirect should be replaced by the standard version, but I know that cosmetic changes are an issue). --Gonnym (talk) 09:33, 4 May 2019 (UTC)
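A sketch of the traversal such a bot would need (pywikibot; illustrative). As the replies below make clear, the exclusion handling is the hard part: naive recursion escapes the topic area quickly, so the visited set, depth cap, and exclusion rules are all load-bearing, and the cap of 6 here is an arbitrary assumption:

```python
# Sketch of the category walk with exclusions; illustrative only.
import pywikibot

site = pywikibot.Site("en", "wikipedia")
EXCLUDE = {
    "Category:Film and television podcasts",
    "Category:Television videos and DVDs",
    # ... the remaining exclusions from the list above
}

def walk(cat: pywikibot.Category, seen: set, depth: int = 0):
    title = cat.title()
    if title in seen or depth > 6:          # arbitrary depth cap
        return
    seen.add(title)
    yield cat                               # tag the category itself
    if title in EXCLUDE or "people" in title.lower():
        return                              # but not its members or subtree
    yield from cat.articles(namespaces=[0, 6, 10, 14, 828])
    for sub in cat.subcategories():
        yield from walk(sub, seen, depth + 1)

# usage: for page in walk(pywikibot.Category(site, "Category:Television"), set()):
#            ... tag page.toggleTalkPage() unless already tagged ...
```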
- @Gonnym: Sure, I can do this. Just to be clear, add tags to category talk and talk namespaces for all pages that are under the 2 categories you requested, except for pages in the categories listed to avoid. --DannyS712 (talk) 18:39, 4 May 2019 (UTC)
- Yes, add the WikiProject tag above to all talk pages of the category, article, template, module, Wikipedia and file namespaces of all pages and categories listed under the two top category trees (so all sub-categories, not just the ones directly in the top category), except for the pages listed in the categories I've listed above (but do tag those categories and sub-categories) as those categories probably have a lot of pages that shouldn't be tagged. And also don't tag if it is already tagged by a redirect template (so there won't be two of the same tags). --Gonnym (talk) 19:06, 4 May 2019 (UTC)
- Please don't ask for "all sub-categories", this has caused much trouble in the past. We prefer an explicit list of categories. --Redrose64 🌹 (talk) 21:46, 4 May 2019 (UTC)
- Complete list:
[Two collapsible category-tree boxes, listing the subcategories to include, appeared here.]
Exclude the pages in the categories listed in the exclusion section and all pages in the categories in [two further collapsible category-tree boxes, listing the subcategories to exclude, appeared here].
- Hope the above list is sufficient. --Gonnym (talk) 13:56, 5 May 2019 (UTC)
- All you did was create JavaScript boxes that will allow someone to look at the current subcategories with a lot of clicking. You didn't actually list or review them. At a glance, within the category tree you point to I see several categories for media properties that include TV shows, so you wind up with their subcategories for non-TV media (films, books, comics, video games, etc) and subcategories for characters not limited to those that appeared on TV. The same goes for major media companies, you would wind up with completely unrelated articles such as Version 7 Unix being tagged (Category:Bell Labs Unices is 5 levels down from Category:AT&T, which is in Category:Cable television companies of the United States). Although probably the most insane example is that Category:Video is just a few levels down from Category:Television by several paths, and from there you get Category:Film itself, Category:Video gaming itself, and so on. Anomie⚔ 17:02, 5 May 2019 (UTC)
- There is no real way to review them all, and asking someone to review thousands of categories is just the same as saying no. As I stated above, I'm sure there are false positives, but so what? Anyone spotting an incorrect tag can just revert it. As this isn't a reader-facing change and it isn't even done on the "main" page but on the secondary talk page, the amount of disruption an incorrect tag causes is very minimal, while the gain from tagging thousands of currently untagged pages is great, as those pages will then appear in the article alerts. That said, both your issues can be easily solved by excluding Category:Television companies and its sub-categories and Category:Video. --Gonnym (talk) 19:14, 5 May 2019 (UTC)
- Maybe you could just add that the category name needs to contain the word "television". Or could just look at all categories with television in the title. I've made a list of them at User:WOSlinker/TVCats. -- WOSlinker (talk) 22:06, 5 May 2019 (UTC)
Wikimedia Airplane Type Tagging Bot
I do a fair bit of category tagging on Wikimedia Commons and know that Google's image recognition tools are pretty good these days at recognizing specific types of airplanes in images. When I say they're pretty good, I mean they can distinguish not just a 737 from a 777, which is a good start, but also a 777-300ER from a 777-200, which is even better. Would it be possible to develop an airplane bot that only looks through commons:Category:Media needing categories and adds airplane name tags when it finds images that contain recognizable airplanes? I would also like the bot to place a tag requesting human verification for any tagged images. Hopefully this will help in some way to reduce the backlog of around 1,000,000 images needing categories. Monopoly31121993(2) (talk) 11:25, 6 May 2019 (UTC)
- @Monopoly31121993(2): There is no such category as Category:Media needing categories. --Redrose64 🌹 (talk) 19:15, 6 May 2019 (UTC)
- Commons:Category:Media_needing_categories. -- GreenC 19:29, 6 May 2019 (UTC)
- In that case, this should be brought up at c:Commons:Bots/Work requests rather than here. Anomie⚔ 21:06, 6 May 2019 (UTC)
Do you know how to gain access to a Google AI-driven API? I thought Google donated something like that to Wikipedia but can't remember the details. -- GreenC 19:31, 6 May 2019 (UTC)
- I know that Google Cloud Vision has an API but something Google donated to Wikipedia sounds promising and probably more helpful. Does anyone have any knowledge about Google image recognition donations? Monopoly31121993(2) (talk) 20:01, 6 May 2019 (UTC)
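For reference, the hosted-API side is straightforward; a sketch using the google-cloud-vision client library (illustrative: whether donated access is available, and whether stock label detection can actually distinguish 777 variants rather than needing a custom-trained model, are exactly the open questions here):

```python
# Illustrative only: label detection with the google-cloud-vision client.
# Requires credentials; the image URL is a placeholder.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
image = vision.Image(source=vision.ImageSource(image_uri="https://example.org/plane.jpg"))
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(label.description, round(label.score, 2))  # e.g. "Airplane" 0.97
```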
- Press release for Cloud Vision donation.
Finding where and how to access it: I don't know where to find it or how to access it. -- GreenC 21:41, 6 May 2019 (UTC)
Update pagelinks to Nova
Hello, can a bot be implemented to change
[[Nova (TV series)]]
or
[[Nova (TV series)|
to
[[Nova (U.S. TV series)]]
or
[[Nova (U.S. TV series)|
to update pagelinks for disambiguation purposes (the old title now links to a disambiguation page; the page names are ambiguous).
-- 70.51.201.106 (talk) 16:29, 9 July 2019 (UTC)
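The find-and-replace itself is trivial; a sketch of the two patterns in one regex (whether it should run unsupervised is the CONTEXTBOT question discussed below):

```python
# Sketch: retarget both plain and piped links in one substitution.
import re

def retarget(text: str) -> str:
    return re.sub(r"\[\[Nova \(TV series\)(\||\]\])",
                  r"[[Nova (U.S. TV series)\1", text)
```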
- Likely a WP:AWB run, if anything, per WP:CONTEXTBOT. Headbomb {t · c · p · b} 17:45, 9 July 2019 (UTC)
- Headbomb, I'll look into it right now. StudiesWorld (talk) 17:52, 9 July 2019 (UTC)
- Disambiguation is almost literally the definition of context-dependent. It requires a review of the material surrounding the link to verify that you are selecting a correct link. Per WP:CONTEXTBOT, this might reasonably be done with AWB, but should not be performed by an automated program. --Izno (talk) 18:17, 9 July 2019 (UTC)
- I'm not sure this is a CONTEXTBOT: the page has just been moved today, so that any links to the redirect (to the DAB) were links to the US series until a few hours ago, and this request is to clear up after that. Spike 'em (talk) 23:12, 9 July 2019 (UTC)
- Less than 500 pages total, and since it was just changed this should probably be an AWB run to get them changed quickly. Primefac (talk) 23:53, 9 July 2019 (UTC)
- Done, though someone should probably determine which disambiguator to use; there are two <s>guidelines</s> editors saying two different things. Primefac (talk) 00:59, 10 July 2019 (UTC)
- Not sure what other guideline you know, but WP:NCTV is the only TV NC guideline and it says very clearly to use American as was decided in a recent RfC. --Gonnym (talk) 05:46, 10 July 2019 (UTC)
- I know; I meant to say "two editors say two different things"; I've move-protected the page just to force some dialogue if the second editor is still insistent. Primefac (talk) 14:07, 10 July 2019 (UTC)
- @Spike 'em: But were they correct links to the US series, or intended links to some non-US series that were mistargeted? Anomie⚔ 12:30, 10 July 2019 (UTC)
- As near as I could tell from the immediate context when I did them, they were all the correct target. But yes, there was the possibility of not pointing to the correct article. Primefac (talk) 14:07, 10 July 2019 (UTC)
Bot to improve names of media sources in references
Many references on Wikipedia point to large media organizations such as the New York Times. However, the names are often abbreviated, not italicized, and/or missing wikilinks to the media organization. I'd like to propose a bot that could go to an article like this one and automatically replace "NY Times" with "New York Times". Other large media organizations (e.g. BBC, Washington Post, and so on) could fairly easily be added, I imagine. - Sdkb (talk) 04:43, 19 November 2018 (UTC)
- What about the Times's page? The page says: 'The New York Times (sometimes abbreviated as the NYT and NY Times)…' The bot might replace those too and that might be a little confusing… The 2nd Red Guy (talk) 14:55, 23 April 2019 (UTC)
- And this page too! Wait, what if it changes its own description on its user page? The 2nd Red Guy (talk) 15:43, 23 April 2019 (UTC)
- I would be wary of WP:CONTEXTBOT. For instance, NYT can refer to a supplement of the Helsingin Sanomat#Format (in addition to the New York Times), and may be the main use on Finland-related pages. TigraanClick here to contact me 13:40, 20 November 2018 (UTC)
- @Tigraan: That's a good point. I think it'd be fairly easy to work around that sort of issue, though — before having any bot make any change to a reference, have it check that the URL goes to the expected website. So in the case of the New York Times, if a reference with "NYT" didn't also contain the URL nytimes.com, it wouldn't make the replacement. There might still be some limitations, but given that the bot is already operating only within the limited domain of a specific field of the citation template, I think there's a fairly low risk that it'd make errors. - Sdkb (talk) 10:52, 25 November 2018 (UTC)
- I should add that part of the reason I think this is important is that, in addition to just standardizing content, it'd allow people to more easily check whether a source used in a reference is likely to be reliable. - Sdkb (talk) 22:01, 25 November 2018 (UTC)
- @Sdkb: This is significantly harder than it seems, as most bots are. Wikipedia is one giant exception - the long tail of unexpected gotchas is very long, particularly on formatting issues. Another problem is agencies (AP, UPI, Reuters). Often the NYT is running an agency story. The cite should use NYT in the |work= and the agency in the |agency=, but often the agency ends up in the |work= field, so the bot couldn't blindly make changes without considerable room for error. I have a sense of what needs to be done: extract every cite on Enwiki with a |url= containing nytimes.com, extract every |work= from those and create a unique list, manually remove from the list anything that shouldn't belong like Reuters etc., then the bot keys off that list before making live changes: it knows what is safe to change (anything in the list). It's just a hell of a job in terms of time and resources considering all the sites to be processed and manual checks involved. See also Wikipedia:Bots/Dictionary#Cosmetic_edit: "the term cosmetic edit is often used to encompass all edits of such little value that the community deems them to not be worth making in bulk" .. this is probably a borderline case, though I have no opinion which side of the border it falls; other people might during the BRFA. -- GreenC 16:53, 26 November 2018 (UTC)
- @GreenC: Thanks for the thought you're putting into considering this idea; I appreciate it. One way the bot could work to avoid that issue is to not key off of URLs, but rather off of the abbreviations. As in, it'd be triggered by the "NYT" in either the work or agency field, and then use the URL just as a confirmation to double check. That way, errors users have made in the citation fields would remain, but at least the format would be improved and no new errors would be introduced. - Sdkb (talk) 08:17, 27 November 2018 (UTC)
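A sketch of the survey step GreenC describes, assuming mwparserfromhell over a dump or API scan (the iteration plumbing is omitted):

```python
# Sketch: tally the distinct |work= values seen alongside nytimes.com URLs.
from collections import Counter
import mwparserfromhell

works = Counter()

def tally(wikitext: str) -> None:
    for tpl in mwparserfromhell.parse(wikitext).filter_templates():
        if not str(tpl.name).strip().lower().startswith("cite"):
            continue
        if tpl.has("url") and "nytimes.com" in str(tpl.get("url").value):
            if tpl.has("work"):
                works[str(tpl.get("work").value).strip()] += 1
```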
- Right that's basically what I was saying also. But to get all the possible abbreviations requires scanning the system because the variety of abbreviations is unknowable ahead of time. Unless pick a few that might be common, but it would miss a lot. -- GreenC 14:54, 27 November 2018 (UTC)
- Well, for NYT at the least, citations with a
|url=https://www.nytimes.com/...
could be safely assumed to be referring to the New York Times. Headbomb {t · c · p · b} 01:20, 8 December 2018 (UTC)
- Yeah, I'm not too worried about comprehensiveness for now; I'd mainly just like to see the bot get off the ground and able to handle the two or three most common abbreviations for maybe half a dozen really big newspapers. From there, I imagine, a framework will be in place that'd then allow the bot to expand to other papers or abbreviations over time. - Sdkb (talk) 07:01, 12 December 2018 (UTC)
- Conversation here seems to have died down. Is there anything I can do to move the proposal forward? - Sdkb (talk) 21:42, 14 January 2019 (UTC)
- I am not against this idea totally but the bot would have to be a very good one for this to be a net positive and not end up creating more work. Emir of Wikipedia (talk) 22:18, 14 January 2019 (UTC)
- @Sdkb: you could build a list of unambiguous cases, e.g. |work/journal/magazine/newspaper/website=NYT combined with |url=https://www.nytimes.com/... . Short of that, it's too much of a WP:CONTEXTBOT. I'll also point out that NY Times isn't exactly obscure/ambiguous either. Headbomb {t · c · p · b} 17:47, 27 January 2019 (UTC)
. Short of that, it's too much of a WP:CONTEXTBOT. I'll also point out that NY Times isn't exactly obscure/ambiguous either.Headbomb {t · c · p · b} 17:47, 27 January 2019 (UTC)- Okay, here's an initial list:
- |work/journal/magazine/newspaper/website=NYT combined with |url=https://www.nytimes.com/...
- |work/journal/magazine/newspaper/website=NY Times combined with |url=https://www.nytimes.com/...
- |work/journal/magazine/newspaper/website=NYTimes combined with |url=https://www.nytimes.com/...
- |work/journal/magazine/newspaper/website=New York Times combined with |url=https://www.nytimes.com/...
- |work/journal/magazine/newspaper/website=The New York Times combined with |url=https://www.nytimes.com/...
- |work/journal/magazine/newspaper/website=LA Times combined with |url=https://www.latimes.com/...
- |work/journal/magazine/newspaper/website=L.A. Times combined with |url=https://www.latimes.com/...
- |work/journal/magazine/newspaper/website=Los Angeles Times combined with |url=https://www.latimes.com/...
- |work/journal/magazine/newspaper/website=WaPo combined with |url=https://www.washingtonpost.com/...
- |work/journal/magazine/newspaper/website=Wa Po combined with |url=https://www.washingtonpost.com/...
- |work/journal/magazine/newspaper/website=Washington Post combined with |url=https://www.washingtonpost.com/...
- |work/journal/magazine/newspaper/website=The Washington Post combined with |url=https://www.washingtonpost.com/...
- |work/journal/magazine/newspaper/website=WSJ combined with |url=https://www.wsj.com/...
- |work/journal/magazine/newspaper/website=Wall St. Journal combined with |url=https://www.wsj.com/...
- |work/journal/magazine/newspaper/website=Wall Street Journal combined with |url=https://www.wsj.com/...
- |work/journal/magazine/newspaper/website=The Wall Street Journal combined with |url=https://www.wsj.com/...
Sdkb (talk) 03:54, 1 February 2019 (UTC)
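A sketch of how a bot might apply such a list, assuming mwparserfromhell; the RULES mapping shows only a few of the entries above, and the canonical targets and wikilinking are illustrative choices (the CITEVAR caveats raised below still apply):

```python
# Sketch: expand known aliases only when the URL confirms the publisher.
import mwparserfromhell

RULES = {  # alias -> (canonical name, required URL prefix); partial list
    "NYT": ("The New York Times", "https://www.nytimes.com/"),
    "NY Times": ("The New York Times", "https://www.nytimes.com/"),
    "WaPo": ("The Washington Post", "https://www.washingtonpost.com/"),
    "WSJ": ("The Wall Street Journal", "https://www.wsj.com/"),
}
WORK_PARAMS = ("work", "journal", "magazine", "newspaper", "website")

def expand_works(wikitext: str) -> str:
    code = mwparserfromhell.parse(wikitext)
    for tpl in code.filter_templates():
        if not tpl.has("url"):
            continue
        url = str(tpl.get("url").value).strip()
        for param in WORK_PARAMS:
            if not tpl.has(param):
                continue
            alias = str(tpl.get(param).value).strip()
            if alias in RULES and url.startswith(RULES[alias][1]):
                tpl.add(param, f"[[{RULES[alias][0]}]]")  # overwrite the value
    return str(code)
```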
- What about BYU to Brigham Young University? The 2nd Red Guy (talk) 15:41, 23 April 2019 (UTC)
- Sorry, I'm not sure what you're proposing here. Is BYU a media source? - Sdkb (talk) 18:07, 19 June 2019 (UTC)
Changing New York Times to The New York Times would be great. I have seen people going through AWB runs doing it, but seems like a waste of human time. Kees08 (Talk) 23:32, 2 February 2019 (UTC)
- @Kees08: Thanks; I added in those cases. - Sdkb (talk) 01:19, 3 February 2019 (UTC)
- Not really sure changing Foobar to The Foobar is desired in many cases. WP:CITEVAR will certainly apply to a few of those. For NYT/NY Times, WaPo/Wa Po, WSJ, LA Times/L.A. Times, are those guaranteed to refer to a version of these journals that was actually called by the full name? Meaning, was there some point in the LA Times's history where "LA Times" or some such was featured on the masthead of the publication, in either print or web form? If so, that's a bad bot task. If not, then there's likely no issue with it. Headbomb {t · c · p · b} 01:54, 3 February 2019 (UTC)
- For the "the" publications, it's part of their name, so referring to just "Foobar" is incorrect usage. (It's admittedly a nitpicky correction, but one we may as well make while we're in the process of making what I'd consider more important improvements, namely adding the wikilinks to help readers more easily verify the reliability of a source.) Regarding the question of whether any of those publications ever used the abbreviated name as a formal name for something, I'd doubt it, as it'd be very confusing, but I'm not fully sure how to check that by Googling. - Sdkb (talk) 21:04, 3 February 2019 (UTC)
- The omission of 'the' is a legitimate stylistic variation. And even if 'N.Y. Times' never appeared on the masthead, the expansion of abbreviations (e.g. N.Y. Times / L.A. Times) could also be a legitimate stylistic variation. The acronyms (e.g. NYT/WSJ) are much safer to expand though. Headbomb {t · c · p · b} 21:41, 3 February 2019 (UTC)
- It is a change I have had to do many times since it is brought up in reviews (FAC usually I think). It would be nice if we could find parameters to make it possible. Going by the article, since December 1, 1896, it has been referred to as The New York Times. The ranges are:
- September 18, 1851–September 13, 1857 New-York Daily Times
- September 14, 1857–November 30, 1896 The New-York Times
- December 1, 1896–current The New York Times
- New York Times has never been the title of the newspaper, and we could use date ranges to verify we do not hit the edge cases of pre-December 1, 1896 The New York Times articles. There is The New York Times International Edition, but it seems like it has a different base-URL than nytimes.com. I can go through the effort to verify the names of the other publications throughout the years, but do you agree with my assessment of The New York Times? Kees08 (Talk) 01:51, 4 February 2019 (UTC)
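Kees08's date ranges translate directly into a check like this (a sketch; the dates are the ones listed above):

```python
# Sketch: which masthead title was in use on a given publication date.
from datetime import date

def nyt_title(pub_date: date) -> str:
    if pub_date >= date(1896, 12, 1):
        return "The New York Times"
    if pub_date >= date(1857, 9, 14):
        return "The New-York Times"
    return "New-York Daily Times"

# e.g. nyt_title(date(1890, 1, 1)) -> "The New-York Times"
```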
Is anyone interested in this? I still think it would save myself a lot of editing time. Headbomb did you have further thoughts? Kees08 (Talk) 16:21, 15 March 2019 (UTC)
- @Kees08: I definitely still am, but I'm not sure how to move the proposal forward from here. - Sdkb (talk) 21:45, 21 March 2019 (UTC)
Bot to make a mass nom of subcategories in a tree
Is it possible for a bot to nominate all the subcategories in the tree Category:Screenplays by writer for a rename based on Wikipedia:Categories for discussion/Log/2019 May 10#Category:Screenplays by writer? There are about a thousand of them! I guess each one needs to be tagged with {{subst:CFR||Category:Screenplays by writer}}, and then added to the list at the nom. Is this feasible? --woodensuperman 15:23, 10 May 2019 (UTC)
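The tagging half is mechanical; a sketch with pywikibot (illustrative; appending each category to the nomination list would be a separate step, omitted here):

```python
# Sketch: prepend the CfD tag to every subcategory; illustrative only.
import pywikibot

site = pywikibot.Site("en", "wikipedia")
root = pywikibot.Category(site, "Category:Screenplays by writer")

for cat in root.subcategories():
    cat.text = "{{subst:CFR||Category:Screenplays by writer}}\n" + cat.text
    cat.save(summary="Tagging for [[Wikipedia:Categories for discussion/"
                     "Log/2019 May 10#Category:Screenplays by writer]]")
```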
- @BrownHairedGirl: ? --Izno (talk) 15:53, 10 May 2019 (UTC)
- @Woodensuperman and Izno: technically, I could do this quite easily.
- But I won't do it for a proposal to rename to "Category:Films by writer". Many films are based on books or plays, so "Category:Films by writer" is ambiguous: it could refer either to the writer of the original work, or to the writer of the screenplay.
- I suggest that Woodensuperman should withdraw the CFD nomination, and open a discussion at WT:FILM about possible options ... and only once one or more options have been clarified consider opening a mass nomination. --BrownHairedGirl (talk) • (contribs) 16:03, 10 May 2019 (UTC)
- @DannyS712:, that's one for you, I think? Headbomb {t · c · p · b} 23:46, 12 May 2019 (UTC)
- @Headbomb: yes, but since BHG suggested that the nom be withdrawn and a discussion opened first, I was going to wait and see what woodensuperman says before chiming in --DannyS712 (talk) 23:47, 12 May 2019 (UTC)
- @DannyS712: I don't intend to withdraw the nom. I think a sensible discussion can be had at CFD. --woodensuperman 11:30, 13 May 2019 (UTC)
- @Woodensuperman: I just finished my bot trial, so I can't do this run, sorry --DannyS712 (talk) 21:13, 14 May 2019 (UTC)
@DannyS712 and Woodensuperman: does this still need doing? Headbomb {t · c · p · b} 01:14, 3 July 2019 (UTC)
- @Headbomb: Well, the nom is still open. It's gone a bit stale though... --woodensuperman 07:52, 3 July 2019 (UTC)
Moscow Metro station article location map
I just created Module:Location map/data/Moscow Metro to replace the alternative of Module:Location map/data/Russia Moscow Ring Road, because the Moscow Metro system has expanded beyond the boundary of the latter map. There are over 100 Moscow Metro station articles that need to be updated this way:
- from:
{{Infobox station
...
|map_type = Moscow Ring Road
|AlternativeMap = Moscow map MKAD grayscale.png
|map_overlay = Moscow map MKAD metro line.svg
...
}}
to:
{{Infobox station
...
|map_type = Moscow Metro
...
}}
-- Sameboat - 同舟 (talk · contri.) 04:55, 7 June 2019 (UTC)
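A sketch of the parameter rewrite (the regexes are illustrative and assume the three parameters appear in the form shown above; per the exchange below, only articles actually using the Moscow Ring Road map with the alternative and overlay should be touched):

```python
# Sketch: swap the map_type and strip the now-unneeded parameters.
import re

def convert(text: str) -> str:
    text = re.sub(r"\|\s*map_type\s*=\s*Moscow Ring Road",
                  "|map_type = Moscow Metro", text)
    text = re.sub(r"\|\s*AlternativeMap\s*=\s*Moscow map MKAD grayscale\.png\s*\n",
                  "", text)
    text = re.sub(r"\|\s*map_overlay\s*=\s*Moscow map MKAD metro line\.svg\s*\n",
                  "", text)
    return text
```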
- @Sameboat: I am not volunteering at the moment but would help to change some articles manually to show example. Also clarify which articles (ie. all in Category:Moscow Metro stations?) -- GreenC 05:07, 25 June 2019 (UTC)
- @GreenC: Nekrasovka (Moscow Metro) is the article manually converted by myself. Not all Moscow Metro stations need to be converted, only those using the "Moscow Ring Road" location map with alternative and overlay. The others using the Central Moscow location map, like Kuznetsky Most (Moscow Metro), can remain intact. -- Sameboat - 同舟 (talk · contri.) 05:35, 25 June 2019 (UTC)
- @Sameboat: Ok you said "Not all Moscow Metro stations need to be converted, only those using the "Moscow Ring Road" location map with alternative and overlay" and gave this example, however in that example there is no alternative and overlay. Can you provide a precise rule or logic that bot would follow? Thanks. -- GreenC 16:16, 19 July 2019 (UTC)
- @GreenC: Sorry. This one is the more proper editing example. -- Sameboat - 同舟 (talk · contri.) 02:44, 20 July 2019 (UTC)
Update world WPA rankings for infobox
Hi, I'm enquiring about the possibility of a bot that could generate world rankings from WPApool.com/rankings and generate and update Template:Infobox pool player/rankings. The {{Infobox pool player}} reads this file (as well as Template:Infobox pool player/Euro Tour rankings). One of the biggest issues is the formatting of the rankings, and that the names used aren't guaranteed to be the same as those of the articles (which is why there are so many name changes in that template).
Is there any way a bot could help out with this update process? Best Wishes, Lee Vilenski (talk • contribs) 19:06, 25 May 2019 (UTC)
Auto archiving all links to specific defunct websites
So, maybe a bot that does this already exists, in which case, awesome. I've seen this come up twice in the last week, and it occurred to me it should probably be automated. So let's say there's a website, and it is used as a reference on a bunch of articles. The company that maintained the website shuts down, and the domain gets sold to a company that sells dick pills (I'm not being needlessly vulgar, by the way, this is a real example). Every reference to an article on that site now contains a link to a dick pill advertisement. Sometimes the article is archived somewhere, and citation templates have "dead-url=unfit" for this sort of situation, so we can note the original url for editors but never display it to readers.
Anyway, why automation? It might be just a few references, it might be a lot. But in the narrow type of situation I described, all existing article-space links to that site should stop appearing for readers, whether or not an archive can be found. The bot I imagine doing this would just sit around until an operator gave it a domain that has suffered such a fate, and it would get to work, hiding the urls from articles and replacing with archives if they exist. So, does a bot like that exist? Thanks. Someguy1221 (talk) 06:31, 23 May 2019 (UTC)
- @Someguy1221: See User:InternetArchiveBot --DannyS712 (talk) 06:35, 23 May 2019 (UTC)
- I was already aware of that, but even after looking through the runpage, the manual, and even the source code, I couldn't find a way to make that bot actually mark reference templates with "dead-url=unfit"/"dead-url=usurped" or otherwise remove the url from reference templates. Someguy1221 (talk) 01:37, 24 May 2019 (UTC)
User:Someguy1221, interesting points. A bot could readily toggle |dead-url=unfit and add an archive link. But there is the question of non-CS1|2 links that have no archive available, or non-CS1|2 links that use {{webarchive}}. One solution: create a new template called {{unfit}}, extract the URL from the square brackets, and replace it with {{unfit}}. Examples:
Scenario 1:
- Org:
Author (2019). [http://trickydick.com "Title"]{{dead link}}, accessed 2019
- New
Author (2019). {{unfit|url=http://trickydick.com|title="Title"}}, accessed 2019
..in the New case it would display the title without the hyperlink.
Scenario 2:
- Org:
Author (2019). [http://trickydick.com "Title"] {{webarchive|url=<archiveurl>}}, accessed 2019
- New:
Author (2019). {{unfit|url=http://trickydick.com|title="Title"}} {{webarchive|url=<archiveurl>}}, accessed 2019
..in the New case it would display the archive URL only.
There are many other scenarios like {{official}} / {{URL}}, bare links with no title or anything else (maybe these could disappear entirely from display), etc. -- GreenC 21:43, 26 May 2019 (UTC)
- Given the number of sites using unfit, it probably should be done with IABot since it can scale. It would have to get a nod of approval from User:Cyberpower678, though his time and attention may be limited due to other open projects. In the meantime I might add something to WP:WAYBACKMEDIC, which can take requests for bot runs at WP:URLREQ. It could be a proof of concept anyway. -- GreenC 22:09, 26 May 2019 (UTC)
WikiProject Civil Rights Movement
I'm trying to set up a bot to perform assessment and tagging work for Wikipedia:WikiProject Civil Rights Movement. The bot would need to rely only on keywords present in pages. The bot would provide a list of prospective pages that appear to satisfy the rules given to it. An example of what the project is seeking is something similar to User:InceptionBot. WikiProject Civil Rights Movement uses that bot to generate the report Wikipedia:WikiProject Civil Rights Movement/New articles. Whereas that bot generates a report of new pages, the desired bot would assess old pages. Mitchumch (talk) 16:27, 1 April 2019 (UTC)
- At Wikipedia:Village pump (technical)#Assessment and tagging bot I didn't intend that you should try to set up your own bot. There are plenty of bots already authorised to carry out WikiProject tagging runs. Just describe the selection criteria, and we'll see who picks it up. --Redrose64 🌹 (talk) 19:46, 1 April 2019 (UTC)
- The selection criteria are keywords on pages:
- civil rights movement
- civil rights activist
- black panther party
- black power
- martin luther king
- student nonviolent coordinating committee
- congress of racial equality
- national association for the advancement of colored people
- naacp
- urban league
- southern christian leadership conference
- Mitchumch (talk) 22:02, 1 April 2019 (UTC)
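A sketch of how the old-page scan could work, using CirrusSearch insource: queries via pywikibot (illustrative; the banner-presence check here is naive):

```python
# Sketch: list mainspace pages matching the keywords whose talk pages
# lack the project banner.
import pywikibot

site = pywikibot.Site("en", "wikipedia")
KEYWORDS = ["civil rights movement", "black panther party", "naacp"]  # etc.

candidates = set()
for kw in KEYWORDS:
    for page in site.search('insource:"%s"' % kw, namespaces=[0]):
        talk = page.toggleTalkPage()
        if "WikiProject Civil Rights Movement" not in talk.text:
            candidates.add(page.title())
```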
- Redrose64 Since no one responded, is there another option? Mitchumch (talk) 20:00, 29 April 2019 (UTC)
- @Mitchumch: Since no one has responded yet, I would like to know something. What if InceptionBot were used to generate both old and new pages? Adithyak1997 (talk) 17:49, 4 June 2019 (UTC)
- @Adithyak1997: My preference would be a separate list for old pages. However, if that is not possible, then a report that combines both old and new pages would be acceptable. Mitchumch (talk) 19:04, 5 June 2019 (UTC)
- @Mitchumch: and @Redrose64: I would like to know whether the following method would be feasible to run:
- (i) In the Make list option in AWB, set the source to Wiki search (text).
- (ii) In the wiki search textbox, provide the text Civil rights movement (as an example) and press the Make list button.
- (iii) Then you will get a list of pages containing the phrase 'civil rights movement'. Then add the category Category:Pages with Civil Rights Movement. Other keywords could be added either to the same category or to a new category like Category:Pages with Civil Rights Movement:Civil Rights Movement, where the text after the colon denotes the keyword. There are some restrictions here. Firstly, since I am using a plain method, I don't know how to increase the limit of the make list option, i.e. currently only 1000 pages are listed, which might need to be increased. Secondly, I need to know how to skip pages that are already in the category; I think it needs some regex. I have just mentioned a method which I think is easier for me. Do note that this has to be done manually. Once a person starts it, I think other fellow Wikipedians can help. Adithyak1997 (talk) 17:56, 6 June 2019 (UTC)
- The selection criteria are keywords on pages:
Take over part of User:RonBot
User:Ronhjones disappeared a while back, and the bot hasn't run since. If someone could take over RonBot 10, that would be great. The code is available at User:RonBot/10/Source1, User:RonBot/10/Source2, and User:RonBot/10/Source3. I believe only the last two are relevant, however.
The main idea is that the bot sorts WP:CITEWATCH/SETUP and WP:JCW/EXCLUDE and detects unnecessary/duplicate entries in them. Headbomb {t · c · p · b} 00:23, 10 June 2019 (UTC)
- @Headbomb: Just leaving a note for myself if anything, but I will look into taking this over later this afternoon. Code is already there so should be rather simple for TSB to take over - possibly filing this afternoon. —TheSandDoctor Talk 21:16, 15 June 2019 (UTC)
- @TheSandDoctor: the code might be in need of minor updates, see this (stuff happening after March 8) and this. But it would still be useful as is if you don't have time to do an update. Headbomb {t · c · p · b} 21:21, 15 June 2019 (UTC)
- Unplanned events came up. Planning to do this this coming week, Headbomb. --TheSandDoctor Talk 05:57, 17 June 2019 (UTC)
- No rush. It's a convenient task, but not a critical one. Headbomb {t · c · p · b} 15:17, 17 June 2019 (UTC)
- @TheSandDoctor: any update on this? Headbomb {t · c · p · b} 04:22, 23 June 2019 (UTC)
- Thanks for the prod. Filed, Headbomb. --TheSandDoctor Talk 05:08, 23 June 2019 (UTC)
Civil parish bot
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
I am requesting a bot to create articles for missing civil parishes in England; see User:Crouch, Swale/Bot tasks/Civil parishes (current) for the complete instructions/ideas and User:Crouch, Swale/Civil parishes for many of the missing CPs. The responses to the common objections to bot-created articles are in the "Process" section. I would at minimum include the location and number of people in the parish, but many other suggestions are there (in particular in the "Other ideas" section). As noted, however, Nomis does combine smaller parishes into larger ones and thus would likely be unsuitable; City Population would therefore be better, but it simply doesn't have data at all for small parishes (example), so either those could be left out or created without the population data (and I would add the most recent data from Vision of Britain). I notified Wikipedia talk:WikiProject England#Bot created articles, Wikipedia talk:WikiProject UK geography#Bot created articles and Wikipedia talk:WikiProject Scotland#Bot created articles, and although there was a question, it was over listed buildings, not parishes.
I intend to have the articles created at something like 6 a day (so it's a manageable amount) that (especially if required) I can check manually (and possibly improve). This would mean that it would take about 5 months for this to be done. I don't know enough about how to code a bot, but I can give my instructions to a bot operator.
I intend to have this started in about a month, because I currently have a page move/page creation ban, which would make creating DAB pages and name fixes more difficult; but even if my appeal fails I still intend to go ahead with this. Crouch, Swale (talk) 17:12, 18 June 2019 (UTC)
- "page creation ban" - You probably should not be requesting a page creation bot then. --Izno (talk) 19:31, 18 June 2019 (UTC)
- That's not a problem for pages created by someone else as a result of consensus, see [1] (SilkTork was one of the users who participated in the previous appeal anyway). In any case it's quite possible that that restriction will be removed anyway. Crouch, Swale (talk) 19:34, 18 June 2019 (UTC)
- This would basically be WP:PROXYING. Wait for your appeal to be up and then come back. Even so, you will probably need to have a consensus reached at e.g. WP:VPPRO for a bot to create any articles. --Izno (talk) 20:00, 18 June 2019 (UTC)
- The point is to get consensus before this and that even if I can't manually create articles then this remains an option. Crouch, Swale (talk) 20:23, 18 June 2019 (UTC)
- The example at User:Crouch, Swale/Bot tasks/Civil parishes (current) looks very much like a stub; if all of the proposed articles will be similar in structure and content, then WP:FDB#Bots to create massive lists of stubs applies. Otherwise, WP:MASSCREATION. --Redrose64 🌹 (talk) 21:06, 18 June 2019 (UTC)
- The 2nd half of WP:FDB#Bots to create massive lists of stubs says "exceptions do exist, provided the database contains high-quality/reliable data, that individual entries are considered notable, and that the amount of stubs created can be reasonably reviewed by human editors. If you think your idea qualifies, run it by a WikiProject first". I believe that both of those points have been met, and WP:MASSCREATION says "Any large-scale automated or semi-automated content page creation task must be approved at Wikipedia:Bots/Requests for approval", which this discussion is the start of; but it also says "While no specific definition of "large-scale" was decided, a suggestion of "anything more than 25 or 50" was not opposed" and "Alternatives to simply creating mass quantities of content pages include creating the pages in small batches". In this case, creating batches of 6 a day clearly falls below this. Crouch, Swale (talk) 07:32, 19 June 2019 (UTC)
- None of the project pages you listed above have consensus that this is a good idea. This does look like you are trying to circumvent your article creation restrictions, as 6 articles per day is a lot more than 1 per week. Spike 'em (talk) 08:32, 19 June 2019 (UTC)
- None of the projects raised concerns about CPs (after more than 2 weeks). And I have discussed this with one of the users who participated in the previous appeal, who said "I'm quite happy for Crouch, Swale to present their ideas to others such as Begoon and Iridescent." And the fact that I have disclosed it here likely means that it's not proxying; had I not mentioned it, it could have been. Crouch, Swale (talk) 16:39, 19 June 2019 (UTC)
- There seems to be 1 (one) supportive comment across the 3 projects, and even that expresses concerns: "I think the idea is good in principle. It might need a fair amount of tidying up though". I'd expect more than this to show consensus that it is a good idea. Spike 'em (talk) 09:28, 20 June 2019 (UTC)
- But no opposers (to the CP proposal), and WP:MASSCREATION says "While no specific definition of "large-scale" was decided, a suggestion of "anything more than 25 or 50" was not opposed" and "Alternatives to simply creating mass quantities of content pages include creating the pages in small batches or creating the content pages as subpages of a relevant WikiProject to be individually moved to public facing space after each has been reviewed by human editors". Also, while I get the impression that the 1 article a week was to try to get me to create longer articles etc. instead of many short ones, the main reason for "1 article a week" seemed to be to prevent me from clogging up AFC. Crouch, Swale (talk) 18:53, 24 June 2019 (UTC)
- An alternative (that I pointed out to SilkTork) if this fails is to have the bot create them in draftspace. Crouch, Swale (talk) 19:32, 24 June 2019 (UTC)
- Comments such as "I'm very worried that it sounds like the desired "endgame" of this editor appears to be the rapid creation of about 1,000 articles in a narrow topic area" and "there is the possibility that you are just biding your time in order to unleash hundreds of civil parish stubs on Wikipedia which will need to be examined by someone to check if they are worthwhile" in your last ARCA appeal seem to have plenty of merit. WP:FDB#Bots to create massive lists of stubs also states "If you think your idea qualifies, run it by a WikiProject first (...) to gain consensus for the idea"; lack of replies is not consensus. Spike 'em (talk) 20:53, 24 June 2019 (UTC)
- Creating around 725 articles over the time frame of 5/6 months isn't that rapid, and as noted, the relevant part of MASSCREATION generally applies to 25-50 pages at a time, not smaller batches (6 at a time), even if we assume that the lack of replies doesn't constitute consensus. Crouch, Swale (talk) 08:12, 25 June 2019 (UTC)
Get some clear consensus that this is a good idea and your article creation rights sorted as you have been asked above. Spike 'em (talk) 16:28, 25 June 2019 (UTC)
- Yes, it doesn't say anything about a daily limit, but it does explicitly make reference to "smaller batches". If I am checking each "batch" every day (or so) then that doesn't violate the letter or spirit of that guideline. As far as I'm aware, the guideline is to prevent hundreds of articles being created that might contain errors or not meet the notability guidelines. As I will be checking them, I'll notice any errors, and other editors will likely do so too. But yes, getting clearer consensus for this and clarity on my restrictions would be helpful. Crouch, Swale (talk) 18:50, 25 June 2019 (UTC)
- "Smaller batches" is a way of having a better chance at getting support when things could potentially be contentious. It does not negate the need for prior consensus before creation, but it might make consensus easier to get. Going "I want to create 1000 articles tomorrow!" vs going "Hey, about about we have a bot create 10 articles as drafts as a subpage of WP:PLANTS, see what the feedback is on them, if they need more work, etc... so the next 10 are easier to handle, ... and then we'll see if we get to a point where we're comfortable having the remaining articles get created directly in article space" or similar.
- Note the "it might". People may decide this is too close to violating a page creation ban for comfort. Or maybe they'd be open to such a bot creating articles in the project space if someone other than you reviews each article before it is moved into mainspace. Or maybe people would be comfortable with the task as proposed. Headbomb {t · c · p · b} 01:36, 26 June 2019 (UTC)
- Please note, this BOTREQ is mentioned on Crouch, Swale's restrictions appeal. Spike 'em (talk) 08:52, 3 July 2019 (UTC)
- @Headbomb: I have now produced a more in-depth instruction list (and simplified it for now) at User:Crouch, Swale/Bot tasks/Civil parishes (current)/Simple and produced an overview at User:Crouch, Swale/CP overview. I understand that it's probably not clear to you, so please ask questions where needed. Crouch, Swale (talk) 17:22, 21 July 2019 (UTC)
- Things don't need to be clear to me, they need to be clear to the community / the bot coder. It's clear this idea is too premature for a BRFA, although someone may wish to work with you and figure out what it is you want to do exactly. But right now, whatever bot coder would want to take the task would not even know where to begin to look for the information, or what the template for those articles would be. For example, the first line can be broken down to something like "PARISH is a civil parish in the DISTRICT district, in the county of COUNTY, England." The bot coder would need to know where to get PARISH, DISTRICT and COUNTY from. Then every sentence needs to be broken down like that, as well as every element of the infobox. Figure that out first, then your idea might get some traction. Headbomb {t · c · p · b} 19:37, 21 July 2019 (UTC)
- @Headbomb: I have added the suggested code to User:Crouch, Swale/Bot tasks/Civil parishes (current)/Coded (unfortunately it breaks the infobox, but it shows you what is variable), and the footnotes at User:Crouch, Swale/Bot tasks/Civil parishes (current)/Simple explain what the variable content is; the sections below clarify it further. One thing is, can the bot look at the area at City Population and work out the centre point, or do we need to use the coordinates from the OS? That is to say, can the bot work out the centre point for the "Rattlesden" entry and add coordinates from it. Crouch, Swale (talk) 13:02, 26 July 2019 (UTC)
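As an illustration of the breakdown Headbomb describes, the templating itself is ordinary string formatting once the data sources are settled; a minimal sketch in Python, assuming a hypothetical parishes.csv with parish, district and county columns:

```python
import csv

# Hypothetical first-sentence template; a real run would need every sentence
# and infobox field broken down the same way, with a known data source each.
LEAD = ("'''{parish}''' is a civil parish in the {district} district, "
        "in the county of {county}, England.")

with open("parishes.csv", newline="") as f:    # assumed data file
    for row in csv.DictReader(f):              # assumed columns: parish,district,county
        print(LEAD.format(parish=row["parish"],
                          district=row["district"],
                          county=row["county"]))
```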
- I'm pretty sure not all parishes are located 15 miles northwest of X, and that not all parishes have 59 features. And most importantly, you're not telling where the bot would get any of that information. Headbomb {t · c · p · b} 19:16, 26 July 2019 (UTC)
- @Headbomb: thanks, I've fixed that. The distance, as explained, is from the coordinates of the county town to the coordinates of the parish. The question is whether the bot can produce coordinates for the centre point of the parish (from City Population); if it can't, then the coordinates from the OS can be used. Either way, the coordinates of the parish will be the location for the purpose of distance. And the number of listed buildings comes from the British Listed Buildings website. Crouch, Swale (talk) 12:40, 30 July 2019 (UTC)
Offhand I don't think articles are needed for every civil parish. For one thing, civil parishes might better be covered in list-articles, with only significant ones ever broken out separately. I noticed mention of this discussion by Crouch, Swale at User talk:Multichill, where Crouch Swale asserts that Multichill ran a bot for Scottish listed buildings, but it appears that was a bot to make lists of Scottish listed buildings instead. I am not a frequent reader or editor here, but I resent the fact that this editor is forcing this discussion here. This discussion appears to be a violation of a topic ban, and I think this should be closed, and the infraction should be reported centrally (not sure where, but should this be brought to ARBCOM?) --Doncram (talk) 23:06, 31 July 2019 (UTC)
- Civil parishes are legally recognized places for the purpose of WP:GEOLAND so should be expected to have articles. If anything the lists of listed buildings could be covered in the parish articles but the convention does seem to be that separate lists can be created. The bot request was discussed at ARCA anyway where it was pointed out that my existing restrictions don't need to be removed to allow the bot request. Crouch, Swale (talk) 17:36, 1 August 2019 (UTC)
- I do not agree with the assertion that parishes are uniformly notable; most would in fact be disputed. There have been numerous AFDs ending "delete" about parishes and/or church+parish combo articles.
- As a disputed topic area, this is entirely inappropriate for a bot. --Doncram (talk) 21:29, 3 August 2019 (UTC)
- @Doncram: civil parishes (that is to say, those that are census areas, similar to French communes), as opposed to ecclesiastical parishes (and grouped parishes), don't seem to have been disputed; what AFDs are you referring to? The only one that I can think of is Raydon, which was closed early with unanimous consensus to keep (although Raydon is a village too). Crouch, Swale (talk) 16:22, 4 August 2019 (UTC)
Clone of RonBot #11
User:RonBot and its creator User:Ronhjones have not been active since the first week of April this year, as others have noted at Wikipedia:Administrators'_noticeboard/Archive309#User:RonBot. When I posted about this at Wikipedia:Bots/Noticeboard#User:RonBot_#11, xaosflux attempted to email User:Ronhjones, but found that the email had been disabled.
RonBot #11 searched declined AfC submissions for biographies of women, and added newly declined drafts to Wikipedia:WikiProject Women in Red/Drafts once a week. I was able to use it to develop over a dozen declined drafts to acceptable articles about notable women, and other WiR members used it too. Without the bot, there has been no way of identifying drafts relevant to the Women in Red project for the last 3 months.
Would someone please be able to clone this bot, so that we can easily access declined drafts of women's biographies again?
The BRFA has the source code attached. Many thanks, RebeccaGreen (talk) 06:08, 10 July 2019 (UTC)
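For anyone picking this up, a rough sketch of one way such a scan could work; the pronoun heuristic here is my assumption for illustration only, not necessarily what RonBot #11 actually did (the attached source is authoritative):

```python
import re
import pywikibot

site = pywikibot.Site("en", "wikipedia")
declined = pywikibot.Category(site, "Category:Declined AfC submissions")

hits = []
for draft in declined.articles(namespaces=[118], total=500):  # Draft: namespace
    text = draft.text
    # Crude heuristic: more feminine than masculine pronouns suggests a
    # biography of a woman; a production bot would use better signals.
    she = len(re.findall(r"\b(?:she|her|hers)\b", text, re.I))
    he = len(re.findall(r"\b(?:he|him|his)\b", text, re.I))
    if she > he:
        hits.append(draft.title())

print("\n".join("* [[%s]]" % t for t in hits))  # paste to the WiR drafts page
```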
- Doing... Galobtter (pingó mió) 03:11, 22 July 2019 (UTC)
- RebeccaGreen, BRFA filed Galobtter (pingó mió) 06:48, 22 July 2019 (UTC)
- Galobtter, thank you very much! RebeccaGreen (talk) 07:45, 22 July 2019 (UTC)
Bot to notify draft authors of possible G13 deletion
Hey folks, is anyone interested in putting together a bot to message draft authors whose drafts will soon be eligible for G13? IME there are a good number of editors who create drafts, forget about them, but decide they want them when reminded. I'm speculating that automatic reminders (e.g. 1 week before G13-eligibility) would help cut down the number of unnecessary WP:REFUND/G13 requests. -FASTILY 08:39, 10 June 2019 (UTC)
- I assume this is to take over HasteurBot's task? Primefac (talk) 15:20, 10 June 2019 (UTC)
- Yes, partially. HasteurBot's task was wider in scope, in that it also nominated drafts for deletion. I'm just interested in the notifications bit. -FASTILY 20:58, 10 June 2019 (UTC)
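A minimal sketch of the core loop under stated assumptions (reminders go out roughly a week before the six-month G13 mark; the thresholds, scan method and message text are all placeholders):

```python
from datetime import datetime, timedelta
import pywikibot

site = pywikibot.Site("en", "wikipedia")
now = datetime.utcnow()
# G13 applies after six months without edits; remind roughly a week early.
window = (now - timedelta(days=176), now - timedelta(days=169))

for draft in site.allpages(namespace=118, filterredir=False, total=200):
    last = draft.latest_revision.timestamp        # naive UTC datetime
    if window[0] <= last <= window[1]:
        creator = draft.oldest_revision.user
        talk = pywikibot.Page(site, "User talk:" + creator)
        talk.text += ("\n\n== Draft nearing G13 ==\n[[%s]] has not been "
                      "edited in almost six months and will soon be eligible "
                      "for [[WP:G13]] deletion. ~~~~" % draft.title())
        talk.save(summary="Reminder: draft approaching G13 eligibility")
```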
Renaming over 300 links towards a political party
New Right | Jewish Home | Tkuma | Otzma Yehudit |
---|---|---|---|
New Right | URWP | | |
United Right | | | |
The issue was first raised here.
In Israel, political parties split and merge very rapidly. On 20 February, three parties fused together to create a list called "Union of the Right-Wing Parties" (URWP).
However, since there is no official name in English, only various shortenings in the international press, Wikipedians opted for "United Right (Israel)", which is what the article is titled and what the links point to as well.
Curse our luck: on 29 July the URWP decided to fuse with yet another party called "New Right", under the title... "United Right".
In the case of mergers, the practice is to have different articles.
So the party which is referred to as "United Right" by over 300 links bears the name of the party into which it has merged.
We need a massive renaming of these 300+ links, and quite rapidly as new links in the coming days and weeks will likely refer to the correct "United Right".
Kahlores (talk) 16:23, 29 July 2019 (UTC)
- This is a job for WP:AWB, not a bot. Number 57 17:31, 29 July 2019 (UTC)
- Splitters! --Redrose64 🌹 (talk) 22:14, 29 July 2019 (UTC)
- @Kahlores: to what target do the links to United Right (Israel) need to be changed? bd2412 T 00:56, 30 July 2019 (UTC)
- It depends on the case; some of them should redirect to Union of the Right-Wing Parties (URWP). For example, here: [2]. However, as you can see here: [3], only one link had to be changed to URWP (the national affiliation stayed the same).
- The "Pages that link to "United Right (Israel)" page [4] isn't entirely correct, though. For example, it states that United Right (Israel) is linked on the Eitan Cabel article, but there is no link for United Right there.
- It's probably best to go through it manually. David O. Johnson (talk) 05:08, 30 July 2019 (UTC)
- There is a link to the party on the {{Current MKs}} navbox, so anything that transcludes that will show on the list of links. If the navbox is amended then it will probably remove half the articles and make it easier to do the rest by hand. From what you say, whoever goes through the articles will need to know the context the link is being used in, so is not really a suitable bot / AWB job. Spike 'em (talk) 08:00, 30 July 2019 (UTC)
- I've done that, and it reduced the number of incoming links (from articles) to 11. Number 57 09:06, 30 July 2019 (UTC)
- It seems like the problem has been solved. No articles link towards UR while talking about URWP. Thank you all. Kahlores (talk) 16:32, 30 July 2019 (UTC)
Redirects to garments
I wish someone with a bot could make a bulleted list of redirects related to garments, as described below, and place it at User:Iceblock/Garments.
- If Xxxx (clothing) exists, but Xxxx (garment) does not exist, then add both to the list.
- If Yyyy (garment) exists, but Yyyy (clothing) does not exist, then add both to the list.
- If Zzzz (clothing) redirects to another page than Zzzz (garment) does, then add both to the list.
The list could for instance look like this:
I know that redirects from (garment) should not always be created and targeted to the same title ending with (clothing), as pages ending with (clothing) might be brands and companies, but the bot does not need to check for this. Iceblock (talk) 18:52, 21 July 2019 (UTC)
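Should this ever need automating, a sketch of the three rules above with pywikibot; the input list of base names is hypothetical:

```python
import pywikibot

site = pywikibot.Site("en", "wikipedia")

def target(title):
    """Final title a name points at, or None if the page doesn't exist."""
    page = pywikibot.Page(site, title)
    if not page.exists():
        return None
    return (page.getRedirectTarget().title()
            if page.isRedirectPage() else page.title())

for base in ["Bib", "Thong", "Jumper"]:   # hypothetical list of base names
    clothing = target(base + " (clothing)")
    garment = target(base + " (garment)")
    # Rules 1-3 above collapse to: report whenever the two differ
    # (both missing compares equal, so nothing is reported).
    if clothing != garment:
        print("* [[%s (clothing)]] and [[%s (garment)]]" % (base, base))
```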
- @Iceblock: I've created User:Iceblock/Garments which I hope is what you need. Certes (talk) 10:23, 27 July 2019 (UTC)
- Thank you very much for creating this! I first thought about redirects and creating new redirects, and I didn't think of the case where articles exist instead of redirects. If you have time, I would appreciate it if you could expand the list with all pages and not only redirects. I also ask if you could insert the source code of the query on the page so that another editor can more easily refresh the list some other time. Sorry for my late thanks. Iceblock (talk) 17:02, 6 August 2019 (UTC)
- I've added the articles. The only overlap is Bib and Thong, already identified from the redirect search. No Xxx has articles on both Xxx (clothing) and Xxx (garment). There isn't really any source code. I simply searched for clothing and garment and manipulated the results in a text editor. Certes (talk) 17:26, 6 August 2019 (UTC)
- Thank you again! This is great! Iceblock (talk) 18:05, 6 August 2019 (UTC)
New election article name format
Since earlier this year, a new naming convention for articles on elections, referendums, etc., has been established. Very many articles link to election articles, and after the page moves, very many articles are now linking to what are now redirects. This of course works fine, but I assume it's less economical on the servers when done at that scale, because it means one extra access.
Another thing is that the "What links here" function only displays (indented) the first 500 – or so it seems – of those articles linking to the redirects. In many cases, there are thousands of articles linking to the redirect, and thus all of these do not show. Fixing links in templates is one thing, but links that are placed in articles need to be fixed too.
All these links cannot be fixed automatically, because it may cause awkward wording and/or punctuation, but one thing that actually can be fixed is piped links, because editing those doesn't change wording or punctuation. So I'm suggesting the following changes to be done by a suitable edit bot:
- from the previous naming convention [[United Kingdom general election, nnnn| to the new one [[nnnn United Kingdom general election|, where nnnn = year of election (or month and year of election). Note the pipe sign.
It's preferable if the bot can edit all occurrences in the same article, regardless of year, in a single edit.
This can of course be applied to other types of elections after this initial batch.
HandsomeFella (talk) 09:43, 3 June 2019 (UTC)
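If consensus were reached, the core substitution for this initial batch is a small regex; a sketch covering year and month-plus-year titles:

```python
import re

# Old piped links: [[United Kingdom general election, 2010| or
# [[United Kingdom general election, February 1974|
OLD = re.compile(r"\[\[United Kingdom general election, "
                 r"((?:[A-Z][a-z]+ )?\d{4})\|")

def fix(text):
    return OLD.sub(r"[[\1 United Kingdom general election|", text)

print(fix("won the [[United Kingdom general election, 2010|2010 election]]"))
# -> won the [[2010 United Kingdom general election|2010 election]]
```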
- @HandsomeFella: I'd be happy to do this, but is there consensus for such edits? See WP:NOTBROKEN --DannyS712 (talk) 09:51, 3 June 2019 (UTC)
- I'm aware of WP:NOTBROKEN, but usually redirects do not have 500+ incoming links, resulting in most of them being "out of sight".
- Where do you suggest I can get more input on this?
- HandsomeFella (talk) 12:11, 3 June 2019 (UTC)
- You can ask for input on the relevant WikiProject for elections. Side comment on WP:NOTBROKEN: none of the bullet points listed there is actually relevant to this scenario, where one piped link is being replaced with another, where the former is from an older style. --Gonnym (talk) 11:15, 4 June 2019 (UTC)
- I think the more-interesting item is WP:DWAP. --Izno (talk) 12:28, 3 June 2019 (UTC)
- Wikipedia is regularly asking for donations. Also, as I said above, that is not the big problem, rather a bonus. The problem is that only 500 articles linking to a redirect are visible, despite there being thousands more.
- HandsomeFella (talk) 12:48, 3 June 2019 (UTC)
- If you need to see all links to the redirect, then go to the redirected page and you can view the "What links here" from there. (e.g. Special:WhatLinksHere/United_Kingdom_general_election,_2010) Spike 'em (talk) 13:46, 3 June 2019 (UTC)
- I know that of course. But 1) that's a little backwards, and 2) people might not know that only 500 entries are shown. In fact, I didn't realize that myself until recently, when I counted the articles listed. I found that exactly 500 was too even a number to be a coincidence. You can't expect people – readers, not necessarily editors – to know that. I bet far from all editors know that.
- HandsomeFella (talk) 14:47, 3 June 2019 (UTC)
- You seem to be making a BOTREQ to deal with (possible) shortcomings in other areas of WP. If you think the display of "What links here" is wrong / confusing then you should take that up with whoever maintains that. Spike 'em (talk) 10:35, 4 June 2019 (UTC)
- On a related point, is running a BOT to fix MOS:NOPIPE failures appropriate? Using the 2010 UK election as an example again, I've found some occurrences of [[United Kingdom general election, 2010|2010 United Kingdom general election]] and [[United Kingdom general election, 2010|2010 UK general election]] which I think are valid to fix? Spike 'em (talk) 12:55, 4 June 2019 (UTC)
- At least the first one should be uncontroversial. HandsomeFella (talk) 21:34, 5 June 2019 (UTC)
- But MOS:NOPIPE is talking about distinct sub-topics that are redirected to a parent article, as a way of potentially demonstrating that an article on the sub-topic is needed. There's no way we would ever want to have distinct articles about United Kingdom general election, 2010 and 2010 United Kingdom general election; one should always be a redirect. Nyttend (talk) 19:51, 15 June 2019 (UTC)
- Ah, there may be a point I missed in MOS:NOPIPE, as it does mention using a redirected term directly rather than a piped link. What is happening here is that a piped link of the form [[redirect|target]] is being used, which seems to go against the similarly named WP:NOPIPE, which says to keep links as short as possible. Spike 'em (talk) 09:27, 19 June 2019 (UTC)
Russia district maps
Replace image_map with {{Russia district OSM map}} for all the articles on this list, as in this diff. The maps are already displayed in the articles, but currently this is achieved through a long switch function on {{Infobox Russian district}}; transcluding the template directly would be more efficient.--eh bien mon prince (talk) 11:58, 11 April 2019 (UTC)
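A sketch of the swap with mwparserfromhell; the example article title is only illustrative, and the new value simply overwrites whatever image_map held:

```python
import mwparserfromhell
import pywikibot

def convert(text):
    code = mwparserfromhell.parse(text)
    for tpl in code.filter_templates():
        if (tpl.name.matches("Infobox Russian district")
                and tpl.has("image_map")):
            tpl.get("image_map").value = " {{Russia district OSM map}}\n"
    return str(code)

site = pywikibot.Site("en", "wikipedia")
page = pywikibot.Page(site, "Aginsky District")  # illustrative example
page.text = convert(page.text)
page.save(summary="Transclude {{Russia district OSM map}} directly")
```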
- @Underlying lk: should be pretty similar to the German maps, right? --DannyS712 (talk) 22:31, 11 April 2019 (UTC)
- Yes pretty much. In fact, the German template is based on this one.--eh bien mon prince (talk) 13:26, 12 April 2019 (UTC)
- @Underlying lk: I can do this. I have a few BRFAs currently open, but once some finish I'll file one for this task --DannyS712 (talk) 04:20, 14 April 2019 (UTC)
- @DannyS712: any progress on this?--eh bien mon prince (talk) 19:51, 26 May 2019 (UTC)
- @Underlying lk: Will do this weekend, sorry --DannyS712 (talk) 19:52, 26 May 2019 (UTC)
- @DannyS712: any updates on this and the Germany template? If you're too busy at the moment, perhaps someone else can take over.--eh bien mon prince (talk) 03:46, 5 June 2019 (UTC)
- @Underlying lk: Sorry, I've been sick and really busy IRL. I'll do both next week --DannyS712 (talk) 07:08, 5 June 2019 (UTC)
- @DannyS712: Any news?--eh bien mon prince (talk) 16:35, 29 June 2019 (UTC)
- @Underlying lk: I'm waiting until the german one is done, and I see you've responded on that, so it should be soon. --DannyS712 (talk) 16:37, 29 June 2019 (UTC)
A bot to update and maintain Wikipedia:WikiProject Missing encyclopedic articles
Hello,
Maintenance of WikiProject Missing encyclopedic articles has been very inactive, with several categories not getting updated. I am currently working on updating the Wikipedia:WikiProject Missing encyclopedic articles/List of US Newspapers area, but there are several other areas that have not been touched in years. If you take a look at the lists of missing articles, you see that there are blue links and areas that are not updated. The Wikipedia:WikiProject Missing encyclopedic articles/Progress page is also rarely updated.
I propose a bot that could update all of these missing article lists. Most of the areas are sorted by state, and updating 50 different sections per category is a ton of work. A bot will keep the project fresher. In addition, the Progress page I linked above could also be updated.
Wikipedia:WikiProject Missing encyclopedic articles is losing people and we need a way to jumpstart the project.
Thank you AmericanAir88(talk) 16:57, 27 June 2019 (UTC)
- @AmericanAir88: I could probably do this (A slightly similar task at Wikipedia:Bots/Requests for approval/DannyS712 bot 18) but what specifically are you looking for in terms of edits? Can you link to some diffs? --DannyS712 (talk) 17:51, 27 June 2019 (UTC)
- @DannyS712: Thank you very much. Here are some edits:
- [5] Me updating the Maryland section of missing newspapers by deleting blue links
- [6] Me updating the grand total of missing newspapers after updating the Maryland section.
- [7] Me updating the statistics list. AmericanAir88(talk) 18:11, 27 June 2019 (UTC)
- @AmericanAir88: Okay. The first edit you linked to should be fairly easy to automate, and I'll submit a BRFA ideally in the next few hours. The 2nd could be automated to use a module, e.g. the conversion I made in Special:Diff/891799155. I can try to make/find one for you if you want, or I can try to code that too. The last part is the hardest - would it still be helpful to only do part 1 (or only 1 and 2)? --DannyS712 (talk) 18:19, 27 June 2019 (UTC)
- @DannyS712: Any help would be great. Thank you. AmericanAir88(talk) 18:40, 27 June 2019 (UTC)
One of the issues here is to avoid false positives. Simply having a blue link does not mean there is an article about the subject. It could be a redirect, or a different subject with the same name. If it is a redirect, the topic may or may not be sufficiently well covered. All the best: Rich Farmbrough, 10:55, 28 June 2019 (UTC).
- thanks for the note, already accounted for (also filtering out disambiguation pages, just in case) --DannyS712 (talk) 20:18, 28 June 2019 (UTC)
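For reference, a sketch of the blue-link filtering with those guards in place; the list page title is taken from the request, and a real run would also update the section counts:

```python
import re
import pywikibot

site = pywikibot.Site("en", "wikipedia")
LINK = re.compile(r"\[\[([^|\]#]+)")

def still_missing(line):
    """Keep a line unless its link is a real, non-redirect, non-dab article."""
    m = LINK.search(line)
    if not m:
        return True                      # headers, totals, etc. stay
    page = pywikibot.Page(site, m.group(1))
    if not page.exists():
        return True
    return page.isRedirectPage() or page.isDisambig()

title = ("Wikipedia:WikiProject Missing encyclopedic articles/"
         "List of US Newspapers")
listpage = pywikibot.Page(site, title)
listpage.text = "\n".join(l for l in listpage.text.splitlines()
                          if still_missing(l))
listpage.save(summary="Remove entries that now have articles")
```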
"Validation of the new Hipparcos reduction"
This publication is cited in some 2600 Wikipedia articles but the individual citations are of uneven quality. A template holding the citation exists, {{R:Van Leeuwen 2007 Validation of the new Hipparcos reduction}} (possibly badly categorized at the moment), and could be substituted for the bit between the <ref> tags in the aforementioned 2600+ articles. (Or could Wikidata be leveraged instead?). Urhixidur (talk) 19:55, 28 June 2019 (UTC)
- Templated citations kinda suck for a bunch of reasons. Good to have it as a guide, but include the full citation in the wikitext. IMO -- GreenC 20:03, 28 June 2019 (UTC)
- Make it a subst: template that a bot would regularly use to fix new occurrences? Urhixidur (talk) 20:07, 28 June 2019 (UTC)
- See this discussion for why the Hipparcos reduction does not have a template. Cross-posting a notice of this discussion at WT:AST might be appropriate. Primefac (talk) 12:24, 29 June 2019 (UTC)
- How about substituting {{cite Q|Q28315126}} for it? Urhixidur (talk) 21:12, 3 July 2019 (UTC)
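If the cite Q route were taken, a sketch of the substitution, assuming the existing refs can be recognised by the paper's title in the ref body; matching on DOI or bibcode would be more robust, and named refs would need their name= attribute preserved:

```python
import re

# A whole <ref>...</ref> whose body mentions the paper's title; the
# (?:(?!</ref>).)* idiom stops the match at the closing tag.
REF = re.compile(r"<ref(?:\s[^>/]*)?>(?:(?!</ref>).)*"
                 r"Validation of the new Hipparcos reduction"
                 r"(?:(?!</ref>).)*</ref>", re.S | re.I)

def substitute(text):
    # Drops any name= attribute; a production bot would preserve it.
    return REF.sub("<ref>{{cite Q|Q28315126}}</ref>", text)
```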
Fix or tag references that were incorrectly copied along due to a VisualEditor bug
How feasible would it be to try to fix the 1,800 articles containing references that were incorrectly copied over? This happens when text is copied from an article (without entering edit mode) and is pasted into VisualEditor. An edit filter is being tested here.
This might be solved by finding when the malformed reference was added and seeing which reference had that number at the time, but that sounds like a very difficult task.
Another option would be to tag them with "[full citation needed]". These malformed references are almost impossible to distinguish from the rest, and it's difficult to fix them after some time has passed.
– Thjarkur (talk) 21:02, 24 August 2019 (UTC)
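A sketch of the harder route described above: parse the bad link, load the source article as it stood around the time of the copy, and pull out what was then reference number N. The numbering here is naive wikitext order; named-ref reuse and list-defined references would break it, which is part of what makes this task hard:

```python
import re
import pywikibot

BAD = re.compile(r"<sup>\[\[([^#\]]+)#cite%20note-(\d+)\|\[\d+\][^\]]*\]\]</sup>")
REFS = re.compile(r"<ref[^>/]*>.*?</ref>", re.S)

def resolve(site, bad_match, as_of):
    """Wikitext of ref number N in the source article as of a timestamp."""
    article, num = bad_match.group(1), int(bad_match.group(2))
    page = pywikibot.Page(site, article)
    revid = None
    for rev in page.revisions(reverse=True):      # oldest to newest
        if rev.timestamp > as_of:
            break
        revid = rev.revid
    if revid is None:                             # article younger than copy
        return None
    refs = REFS.findall(page.getOldVersion(revid))
    return refs[num - 1] if len(refs) >= num else None
```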
- Given this (from France, added here): <sup>[[Asylum in France#cite%20note-7|[7]—]]</sup>, is the fix to change it to [[Asylum in France]]? -- GreenC 03:51, 26 August 2019 (UTC)
- The fix would be to change <sup>[[Asylum in France#cite%20note-7|[7]—]]</sup> to <ref>{{Cite journal|title=Asylum Seekers, Violence and Health|...}}</ref>, which was reference number 7 at the time in that article. – Thjarkur (talk) 10:59, 26 August 2019 (UTC)
- The fix would be to change
- To clarify, given this original text from this diff (last diff):
In 2010, France received about 48,100 asylum applications—placing it among the top five asylum recipients in the world<sup>[[Asylum in France#cite%20note-7|[7]—]]</sup>and in subsequent years it saw the number of applications increase, ultimately doubling to 100,412 in 2017.<ref>{{Cite web|url=https://www.asylumineurope.org/sites/default/files/report-download/aida_fr_2017update.pdf|title=aida - Asylum Information Database - Country Report: France|last=|first=|date=2017|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref>
- Replace <sup>[[Asylum in France#cite%20note-7|[7]—]]</sup> with <ref>{{Cite journal|title=Asylum Seekers, Violence and Health|...}}</ref>, so it will look like this: In 2010, France received about 48,100 asylum applications—placing it among the top five asylum recipients in the world<ref>{{Cite journal|title=Asylum Seekers, Violence and Health|...}}</ref> and in subsequent years it saw the number of applications increase, ultimately doubling to 100,412 in 2017.<ref>{{Cite web|url=https://www.asylumineurope.org/sites/default/files/report-download/aida_fr_2017update.pdf|title=aida - Asylum Information Database - Country Report: France|last=|first=|date=2017|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref>
- With the new text being note #7 from Asylum in France as of the date of the diff in the France article (August 20). Woah. -- GreenC 06:52, 27 August 2019 (UTC)
- This is kind of similar to Anomie's orphaned reference fixer. Both would have to look for references in old revisions so I think it should be doable at least. It seems like a good bot task, we just need someone to make it, which won't be me. --Trialpears (talk) 09:28, 27 August 2019 (UTC)
- I am now Coding... but really {{BOTREQ|Abandon all hope, ye who enter here}}. -- GreenC 13:41, 27 August 2019 (UTC)
- This task is a bit devilish; the next best thing would be a quick but ugly change from <sup>[[Asylum in France#cite%20note-7|[7]—]]</sup> to <ref>Missing citation – This reference was incorrectly copied from citation number 7 in the article Asylum in France. The order of the citations may have changed since then.</ref> – Thjarkur (talk) 15:23, 27 August 2019 (UTC)
- {{Incomplete short citation}} also exists; unfortunately it does not currently take any parameter. Perhaps adding a |details= parameter to that template and having the bot put |details=This reference was incorrectly copied from citation number 7 in the article Asylum in France. The order of the citations may have changed since then. would work? Jo-Jo Eumerus (talk, contributions) 07:36, 28 August 2019 (UTC)
- That could work as a last resort. The bot is basically done/solved except for one problem. -- GreenC 14:46, 28 August 2019 (UTC)
- BRFA filed -- GreenC 18:32, 29 August 2019 (UTC)
Mainspace is cleared out, about 3,000 cites. There are still about 149 articles in Draft: and around 900 in User:, but pages in those namespaces are causing the bot problems. Apparently the VE bug is now fixed at the source (?) but there are other ways it can get back into the system. I'll have to monitor and re-run the bot maybe once a month or so for a while. Done for now. -- GreenC 14:36, 5 September 2019 (UTC)
Unbreak a batch of PDF links
The North Carolina Department of Natural and Cultural Resources redid its website some time back, breaking a lot of links (maybe about 3000?) from Wikipedia in the process. The filenames are identical as far as I've seen (they reflect the department's internal numbering system); only the filepaths are different. Example:
- http://www.hpo.ncdcr.gov/nr/MK0090.pdf — old URL format #1
- https://www.ncdcr.gov/state-historic-preservation-office/nr/MK1809.pdf — old URL format #2
- https://files.nc.gov/ncdcr/nr/MK0090.pdf — correct form of URL #1
- https://files.nc.gov/ncdcr/nr/MK1809.pdf — correct form of URL #2
Could someone run a bot to replace the old syntax with the new? Presumably the bot needs to find each occurrence of the old syntax, check the corresponding new URL to ensure that there's a PDF file with that name, replace old with new if the new link works, and log the items whose new-syntax links don't work. Also, by this time, some of the URLs have probably been tagged with {{dead link}}, so if the bot replaces a link, it should remove any associated appearances of this template. Nyttend (talk) 23:54, 8 September 2019 (UTC)
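For what it's worth, the check-then-replace step is straightforward; a sketch, with the {{dead link}} cleanup omitted:

```python
import re
import requests

OLD = re.compile(r"https?://(?:www\.hpo\.ncdcr\.gov|www\.ncdcr\.gov/"
                 r"state-historic-preservation-office)/nr/(\w+\.pdf)")

def migrate(text, failures):
    def repl(m):
        new = "https://files.nc.gov/ncdcr/nr/" + m.group(1)
        # Swap only if the new location actually serves the file;
        # otherwise log the old URL for manual review.
        if requests.head(new, allow_redirects=True, timeout=30).ok:
            return new
        failures.append(m.group(0))
        return m.group(0)
    return OLD.sub(repl, text)
```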
- Nyttend, could you submit this to WP:URLREQ? Copy/move the thread. Can do custom URL moves like this. Taking into account archive URLs (add, change, delete), header checks and other stuff. -- GreenC 01:55, 10 September 2019 (UTC)
- OK, done per your instructions. Nyttend (talk) 02:09, 10 September 2019 (UTC)
Changed closed DeepDotWeb links to web archive links
Earlier this year, the popular dark web news and links site DeepDotWeb was seized for money laundering. It won't be back. I cited https://deepdotweb.com/ in various places on Wikipedia and would prefer not to have to fix them all by hand. An example fix:
- Article: Doxbin
- Change from: https://www.deepdotweb.com/2015/04/15/so-you-want-to-be-a-darknet-drug-lord/
- To: https://web.archive.org/web/20190326163337/https://www.deepdotweb.com/2015/04/15/so-you-want-to-be-a-darknet-drug-lord/
The bot should pick the latest snapshot prior to May 7th 2019, which is when the site was seized.
I've never done a bot request before so I hope this is an appropriate request format! Deku-shrub (talk) 10:36, 17 August 2019 (UTC)
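For what it's worth, the snapshot-near-a-date lookup is supported directly by the Wayback Machine's availability API; a sketch:

```python
import requests

def snapshot_before(url, stamp="20190507"):
    """Closest Wayback snapshot to `stamp` (YYYYMMDD), or None."""
    r = requests.get("https://archive.org/wayback/available",
                     params={"url": url, "timestamp": stamp}, timeout=30)
    closest = r.json().get("archived_snapshots", {}).get("closest")
    # Note: "closest" can postdate the stamp; a CDX query with a "to="
    # parameter would enforce a strict before-the-seizure cutoff.
    return closest["url"] if closest else None

print(snapshot_before("https://www.deepdotweb.com/2015/04/15/"
                      "so-you-want-to-be-a-darknet-drug-lord/"))
```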
- @Deku-shrub: Go to Cyberpower678's talk page and request that IABot treat the site as dead. --Izno (talk) 11:58, 17 August 2019 (UTC)
- Cheers, will do Deku-shrub (talk) 12:18, 17 August 2019 (UTC)
- @Deku-shrub and Izno: Cyberpower created a GUI interface for IABot so that anyone can do this task, no need to ask. -- GreenC 12:57, 17 August 2019 (UTC)
- @Deku-shrub and Izno: Oh sorry, I didn't read the above carefully, it says "bot should pick the latest snapshot prior to May 7th 2019". That is more complicated. This is something my bot might be able to do (?), I don't believe IABot could do it. How about let's make a request at WP:URLREQ since this is a custom one-time job. -- GreenC 13:11, 17 August 2019 (UTC)
- Ahh and it looks to be less than 30 links. Since this would require custom coding etc.. it would be less work to manually fix the links. This is probably not a good bot job. -- GreenC 13:17, 17 August 2019 (UTC)
- Most of them were archived, but I archived 8 links that were not yet. That should be it for enwiki. What remains is setting the domain to Blacklisted in the IABot database, but I'm having trouble accessing the GUI interface at the moment. I'm guessing it was already marked dead/blacklisted; the reason there were a few left is IABot hadn't gotten around to those pages yet. -- GreenC 17:21, 17 August 2019 (UTC)
- GreenC, I've long blacklisted the domains and started a bot job. IABot will likely pull something prior to May as long as the access dates are prior to May as well. —CYBERPOWER (Chat) 19:48, 17 August 2019 (UTC)
Bot to assemble a table of images sortable by aspect ratio.
I've been compiling lists of prehistoric life by US state. These articles contain a huge number of images with a variety of aspect ratios. It has been difficult to establish a consistent standard for "upright=" values by adjusting them manually for each image. I was hoping someone could make a bot that could scan the images on this page, calculate the aspect ratios based on each image's dimensions on their filespace pages, and then assemble a sortable two-columned table on this page, with the first column being the aspect ratios and the second being thumbnails of each relevant image. That way I can sort all the images I've been using and develop a standard for which "upright" values should correspond with which image aspect ratios. Abyssal (talk) 14:23, 15 August 2019 (UTC)
- Abyssal, Done. This task doesn't appear to require frequent updates, so I ran the code in PAWS and copied the result manually. The code is here, let me know if you need me to run it again. --AntiCompositeNumber (talk) 04:45, 18 August 2019 (UTC)
- @AntiCompositeNumber: Fantastic work, ACN. Could you help me with a related task? I was wondering if you could use a bot to take the numerical aspect ratio from all the sort values of the images in the table and plug them into some equations I'll have to get back with you about, and then edit the code of the images in the table to display at "upright=*formula results*"? Abyssal (talk) 22:07, 18 August 2019 (UTC)
- Abyssal, Sure, let me know what equations you want to use and I'll do that for you. --AntiCompositeNumber (talk) 22:35, 18 August 2019 (UTC)
- Sorry, I haven't forgotten this, had some stuff come up IRL. I'll get back to you as quick as I can. Abyssal (talk) 13:10, 22 August 2019 (UTC)
- @AntiCompositeNumber: Alright, for images with an aspect ratio greater than 1.5, can you use the formula:
(1.15/(1+e^(-1.75(x-2.85)))) + 0.67
- and for images with an aspect ratio less than 1.5 can you use
(0.56/(1+e^(-8(x-0.55)))) + 0.18
Abyssal (talk) 20:44, 25 August 2019 (UTC)
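In code form, the two curves as given (the wide-image curve was later replaced; see the follow-up below):

```python
import math

def upright(ratio):
    """Suggested |upright= value for an image of width/height `ratio`."""
    if ratio > 1.5:
        return 1.15 / (1 + math.exp(-1.75 * (ratio - 2.85))) + 0.67
    if ratio < 1.5:
        return 0.56 / (1 + math.exp(-8 * (ratio - 0.55))) + 0.18
    return None  # ratio == 1.5 deliberately unhandled, per the exchange below

print(round(upright(2.0), 2))
```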
- Abyssal, Which should be used if the aspect ratio equals 1.5? --AntiCompositeNumber (talk) 21:14, 25 August 2019 (UTC)
- @AntiCompositeNumber:Oops. Didn't think of that. You can just omit those altogether. Abyssal (talk) 21:37, 25 August 2019 (UTC)
- Abyssal, Done. Code is still here, new data is in the table at User:Abyssal/Aspect ratio table. --AntiCompositeNumber (talk) 22:43, 25 August 2019 (UTC)
- @AntiCompositeNumber:Thanks! Could you replace the results from the "greater than 1.5" operation with results from a new formula? My last one didn't get the images quite right.
1.92611 + (-1417775 - 1.92611)/(1 + (x/0.000002840858)^1.063437)
Abyssal (talk) 22:22, 26 August 2019 (UTC)
- Abyssal, Done --AntiCompositeNumber (talk) 22:56, 26 August 2019 (UTC)
- @AntiCompositeNumber:Awesome. This looks pretty good. Could you match the filenames from the images in the aspect ratio table with their equivalents on this page and replace the upright values in the latter's image codes with the values you just produced with the formula? After that I'll be able to have John of Reading roll them out on the live articles. :D Abyssal (talk) 23:19, 26 August 2019 (UTC)
- @Abyssal: {{botreq|done}} --AntiCompositeNumber (talk) 21:01, 27 August 2019 (UTC)
- @AntiCompositeNumber:Thanks, man. I hate to do this to you, but I have a request for a tweak. Could you replace the "less than 1.5" upright values on both pages with the values produced by the following new equation?
y = 0.7832717 + (0.1597199 - 0.7832717)/(1 + (x/0.4705958)^2.480882)
Abyssal (talk) 22:10, 27 August 2019 (UTC)
- @AntiCompositeNumber:Thanks, man! I'll have John roll this out on the articles. Abyssal (talk) 00:02, 28 August 2019 (UTC)
Bot to remove school mapframe **at a later date**
There is some code that I found that can add a map to the schools infobox:
| module = {{Infobox mapframe | stroke-color = #C60C30 | stroke-width = 3 | marker = school | marker-color = #1F2F57 | zoom = 13}} }}
I liked what this code could do, so I started adding it to some school infoboxes. User:Steven (Editor) told me that there were plans to replace this code with built-in functionality in the infobox for schools itself, so the code would be unnecessary. He stated that he would like me to hold off on adding the code so there would be fewer instances of it to remove once the built-in functionality is installed. I do not know when the installation of the built-in function is expected to occur.
I want to explore whether a bot can be used to remove the lines of code I posted here. If it can just automatically remove this, I can add the code without fear of having to remove it later once the built-in functionality is ready.
Thanks, WhisperToMe (talk) 20:48, 5 July 2019 (UTC)
- @WhisperToMe: - how many, like dozens, hundreds, thousands? -- GreenC 23:22, 9 July 2019 (UTC)
- I put the template in, say (just an uneducated guess), around 30 or so articles before Steven said he had plans to make it redundant and that he wasn't sure whether I should put in any more unless he knew whether a bot could automatically remove the template. Another user later told us a bot could do it. 23:25, 9 July 2019 (UTC)
- Yes, it is trivial, I could do it in a 1-line bot :) But I didn't want to edit thousands of pages, as it would be a lot of watchlist churn; dozens or hundreds is fine. -- GreenC 23:29, 9 July 2019 (UTC)
- I assume when you say "a 1-line bot" you mean "I'll do it manually" because a bot run for 30 pages is rather unnecessary. Primefac (talk) 23:50, 9 July 2019 (UTC)
- No a 1-line bot, faster than manually even for 30 pages. -- GreenC 01:54, 10 July 2019 (UTC)
- Basically this:
awk -ilibrary '{fp=sys2var("wikiget -w " shquote($0)); sub(/<re_pattern>/,"",fp); print fp > "file.txt"; close("file.txt"); print sys2var("wikiget -E " shquote($0) " -S " shquote("Remove redundant with internal code") " -P file.txt") }' page-list.txt
Just need to fill in the regex pattern. For each article in page-list.txt, download the wikisource ("wikiget -w"), substitute the regex pattern with "" (sub()), and upload the result ("wikiget -E") with the edit summary (-S). -- GreenC 02:01, 10 July 2019 (UTC)
just need to fill-in the regex pattern. For each article in page-list.txt, download the wikisource ("wikiget -w"), substitute the regex pattern with "" (sub()), and upload the result ("wikiget -E") with the edit summary (-S). -- GreenC 02:01, 10 July 2019 (UTC)- Fair enough, was just thinking that an AWB run with the "remove parameter" module implemented would work easily enough. Primefac (talk) 10:42, 13 July 2019 (UTC)
- The bot is mostly boilerplate; it could apply to any search-replace, only the sub(/<re_pattern>/,"",fp) would change depending on the task. I've used AWB; it's OK, but mostly it is providing the page download and upload functionality. The action done to the page is more flexible in a script, as you can add logic statements and do anything, including checking URL headers etc., which is not possible with AWB. So if you can do the page download and upload easily enough with a scripted bot, AWB becomes something of a limiting factor vs. a scripted bot, which is easy to make (1 line of code, mostly boilerplate). -- GreenC 16:12, 20 July 2019 (UTC)
Birth date and age
About 762 articles which use {{Infobox person}} or equivalents contain wikitext such as |birth_date=1 May 1970 (age 49). (Search) Some ages are wrong; others may become wrong on the subject's next birthday. Would it be a good idea for a bot to convert this to |birth_date={{Birth date and age|1970|05|01}}, both as a one-off run and on a regular basis for new occurrences? It may also be useful to produce an error report of alleged dates that the bot can't decipher. Ideally, the code should be flexible enough to add similar tasks later. For example, we might want to extend it to {{Infobox software}} with |released={{Start date and age|...}}, though I think that particular case would catch only Adobe Flash Player. Certes (talk) 01:10, 4 June 2019 (UTC)
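The simple DMY case reduces to a regex plus a date parse; a sketch (the other formats noted in the replies below would bolt on as extra patterns):

```python
import re
from datetime import datetime

BIRTH = re.compile(r"\|\s*birth_date\s*=\s*(\d{1,2} \w+ \d{4})\s*\(age \d+\)")

def convert(text):
    def repl(m):
        d = datetime.strptime(m.group(1), "%d %B %Y")
        return "|birth_date={{Birth date and age|%d|%02d|%02d}}" % (
            d.year, d.month, d.day)
    return BIRTH.sub(repl, text)

print(convert("|birth_date=1 May 1970 (age 49)"))
# -> |birth_date={{Birth date and age|1970|05|01}}
```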
- @Certes: Doing... with AWB, at least to test it out DannyS712 (talk) 01:27, 4 June 2019 (UTC)
- @Certes: done a few hundred, but the remaining ones are in a different format DannyS712 (talk) 01:57, 4 June 2019 (UTC)
- DannyS712, nice work! We're down to 430 search results. If you could tweak your AWB patterns a bit, you could probably catch quite a few more. I'm seeing the following common formats:
- "22 August 1988(age 30)" (no space); "1961 (age 57-58)" (year only; use {{Birth year and age}}); "February 8, 1962 (aged 56)" (note MDY format and "aged" instead of "age"); "July 30, 1948<br> (age 70)" (br tag; some are closed with a slash). If you were able to fix these, I suspect that we'd be left with about 50 to clean up manually. – Jonesey95 (talk) 07:27, 4 June 2019 (UTC)
- Thanks Danny and Jonesey. I did a similar exercise with JWB a few years ago, so these cases have accumulated since then at about one per day. I was wondering whether it's worth doing with a bot on a regular basis. I also remember finding a few with "aged", br tags and similar clutter. There are also various date formats to parse, but I hope there's a standard library function for that somewhere. Certes (talk) 09:25, 4 June 2019 (UTC)
- @Certes: I'll do the rest of this batch in the next week, and for next time probably file a brfa DannyS712 (talk) 15:22, 4 June 2019 (UTC)
- @Certes and DannyS712: Thank you for your work on this. I see it too frequently. —МандичкаYO 😜 22:53, 4 July 2019 (UTC)
- There's also a few with Age rather than age (search) -- WOSlinker (talk) 22:22, 21 July 2019 (UTC)
- May as well not restrict the search to infobox person. (search) -- WOSlinker (talk) 13:25, 27 July 2019 (UTC)
Tag communists and communist organizations with Wikiproject Socialism
I would like to tag articles about communists and communist organizations with {{WikiProject Socialism}}. Whenever they do not contain the banner already, they should start with importance=low. I created a list of categories whose articles would be safe to tag: Wikipedia:WikiProject Socialism/Categories. Some categories are not linked (and not recursed into) because they may lead to unrelated articles. Thank you! --MarioGom (talk) 19:44, 15 July 2019 (UTC)
- Categories themselves could be tagged too. Note that there are many redundant categories. If you deduplicate them, you'll be left with 1,097 categories. --MarioGom (talk) 19:51, 15 July 2019 (UTC)
- This sounds like a job for @Anomie:'s WikiProject tagger. --Trialpears (talk) 16:39, 30 July 2019 (UTC)
- I can help with some coding or preparing the input list in a more suitable format if needed. --MarioGom (talk) 14:20, 17 August 2019 (UTC)
- MarioGom have you tried asking for a tag run at User talk:AnomieBOT? --Trialpears (talk) 15:11, 17 August 2019 (UTC)
- Trialpears: I just did, thanks. --MarioGom (talk) 17:02, 18 August 2019 (UTC)
Bot to update economic statistics
I plan on making a bot that updates statistics at regular intervals, for example by updating the latest GDP numbers or inflation numbers on the article Economy of the United States. Numbers will initially be retrieved from the St. Louis Fed's FRED API. Other sources of data could be added later.
I envision the typical flow will be:
- Retrieve a list of all pages using the associated template.
- Parse the pages and retrieve series identification and other relevant information from the templates. For example A191RL1Q225SBEA for quarterly US GDP from FRED.
- Retrieve the latest series values through APIs.
- Replace the old value with the latest value.
I started a similar project several years ago but never followed through. Please let me know if you have any thoughts or suggestions.--Bkwillwm (talk) 03:26, 6 July 2019 (UTC)
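A sketch of step 3 above against the FRED observations endpoint, to the best of my knowledge of its parameters; the API key is a placeholder:

```python
import requests

API_KEY = "YOUR_FRED_API_KEY"  # placeholder

def latest_value(series_id):
    """Most recent (date, value) pair for a FRED series."""
    r = requests.get("https://api.stlouisfed.org/fred/series/observations",
                     params={"series_id": series_id, "api_key": API_KEY,
                             "file_type": "json", "sort_order": "desc",
                             "limit": 1}, timeout=30)
    obs = r.json()["observations"][0]
    return obs["date"], obs["value"]

print(latest_value("A191RL1Q225SBEA"))  # quarterly US real GDP growth
```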
@Bkwillwm: My only thought is frequency of edits. If the GDP number was displayed via template, and the template pulled the number from a database (Wikidata, local template files, Wikicommons tabular data) then the bot is updating the database as frequently as desired without disturbing the article or making thousands of edits in mainspace. -- GreenC 20:03, 21 July 2019 (UTC)
navbox wikitable → wikitable
If you compare the fan polls section on mobile vs. non-mobile, you will see that the table is missing on mobile. This is because the table is using "navbox wikitable" for the class. Nothing with class navbox appears on mobile :( In this particular case, the navbox class is basically superfluous. It would be amazing if we could change all the pages using navbox wikitable to use just wikitable instead, to avoid empty sections on mobile. There are probably more, but this is a start. Frietjes (talk) 16:00, 24 May 2019 (UTC)
- Well, a thing that is different is
Month | Winner | Other candidates |
---|---|---|
June | Bob | Others |
vs
Month | Winner | Other candidates |
---|---|---|
June | Bob | Others |
So it's not just a matter of blindly removing "navbox", which makes it very likely to be a WP:CONTEXTBOT, so a WP:AWB run is likely best over a bot. Could be wrong though. Maybe every instance is easily replaceable (with e.g. centering styles instead). Headbomb {t · c · p · b} 17:26, 27 June 2019 (UTC)
- In that case, just restrict the changes to places where it's followed by "width:100%" or "margin:[0-9]em auto". Frietjes (talk) 14:19, 4 August 2019 (UTC)
- I would advocate for removal of the extra styling without replacement. Additional styling is generally less accessible. --Izno (talk) 23:00, 7 August 2019 (UTC)
- I also think this navbox wikitable combination should be deprecated. I have no strong preference on the styling, but the current situation is simply unacceptable. --Trialpears (talk) 22:30, 8 August 2019 (UTC)
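For what it's worth, a rough Python sketch of the restricted replacement suggested above (it assumes the style attribute directly follows the class attribute, so treat it as a starting point for an AWB rule rather than a finished bot):

import re

# Drop the navbox class only when the table also carries the full-width
# or centring styles mentioned above.
PATTERN = re.compile(
    r'class="navbox wikitable"(\s+style="[^"]*'
    r'(?:width:\s*100%|margin:\s*[0-9]em auto)[^"]*")')

def drop_navbox_class(wikitext):
    return PATTERN.sub(r'class="wikitable"\1', wikitext)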
Fixing incorrect footnote formatting
This is a bit of a cosmetic task, so perhaps it could be added when a bot is doing other work already rather than as a stand-alone operation. But I've seen lots of footnotes that violate MOS:REFPUNCT by having a space before the footnote or by having punctuation after it. I can't immediately think of any exceptions to the rule, although there are probably some. Could a bot help with this? - Sdkb (talk) 20:21, 2 August 2019 (UTC)
- This is already part of standard AWB typofixing, so they should be semi regularly fixed, but some help from the bots would always be welcome as well if it has high enough accuracy. --Trialpears (talk) 20:34, 2 August 2019 (UTC)
Complete merge of Template:WikiProject Patna
Per Wikipedia:Templates for discussion/Log/2019 September 1#Template:WikiProject Patna I would like to complete the merge of this template into {{WikiProject India}}. I did have a look at doing this with AWB but it doesn't seem to be up to the task (or at least, I'm not). There are 590 transclusions of {{WikiProject Patna}}, these can all be replaced as so:
- If {{WikiProject India}} is already present, then add
|patna=yes
and|patna-importance=
using the importance rating from {{WikiProject Patna}}: example edit
otherwise (and there may not be many of these)
- Convert {{WikiProject Patna}} to {{WikiProject India}} and add
|patna=yes
and|patna-importance=
duplicating the existing importance rating: example edit
Let me know if anything else is needed. PC78 (talk) 11:20, 15 September 2019 (UTC)
- PC78 I've made a substitutable version of {{WikiProject Patna}} in its sandbox which could be used when {{WikiProject India}} isn't present. Some more testing should be done, but I believe it's functional. I also have some regex for AWB from a previous merger that should work here as well. I do not trust it unsupervised though, so make sure to properly review all edits if you decide on using it. Search for
({{WikiProject India[^}]*)((a|[^a])*)({{WikiProject Patna[^}]*\|importance=)([^}|=]*)([^}]*}})
and replace with $1|patna=yes|patna-importance=$5$2 if WP Patna is after WP India, and use
({{WikiProject India[^}]*)((a|[^a])*)({{WikiProject Patna[^}]*\|importance=)([^}|=]*)([^}]*}})
and $3$5|patna=yes|patna-importance=$2 if WP Patna is before WP India. --Trialpears (talk) 12:01, 15 September 2019 (UTC)
- Your regex looks good, I'm giving it a go now. Cheers! PC78 (talk) 14:20, 15 September 2019 (UTC)
- PC78 Done --Trialpears (talk) 17:05, 16 September 2019 (UTC)
A Bot to update Portal's In the news section
I would like to have a bot update this page once daily and then transfer the resulting content to the target page here. Purpose: If I directly link the daily news feed generation code to the portal main page, the portal's main page script takes too long to run as it already has heavy scripts to generate the "Did you know.." (DYK) section. Unlike DYK the news does not need to update every time the reader refreshes the portal - so it can be a static post updated once a day. So, I'd like to run the news generation code only once daily and only link the resulting output to the portal main page. If this can be done for one portal (i.e. Portal:Asia), the same or a similar bot can be used for other portals as well. Arman (Talk) 09:03, 29 August 2019 (UTC)
- @Armanaziz: User:JL-Bot/Project content with
|content-mainpage-in-the-news
may do what you need. Certes (talk) 09:39, 29 August 2019 (UTC)
- @Certes: Could you please suggest how I can use this bot to specifically update this page and convert the content into flat text? Please note this page runs a template to randomly pick the desired DYK entries. Arman (Talk) 11:08, 29 August 2019 (UTC)
- You could try something like
{{User:JL-Bot/Project content|template = WikiProject Asia|content-mainpage-in-the-news}}
. I've only used it as a DYK experiment; see the history of Portal:Bangladesh/Recognized content for an example of that. News will only appear when the bot runs, which may take several days. An alternative is to use {{Transclude selected recent additions}} in either the portal or a transcluded subpage. Basic usage is{{Transclude selected recent additions|Asia}}
. If you want to exclude Asian etc., filter on|Asia%f[%A]
instead of|Asia
.|not=
can avoid irrelevant similarly named topics like Asia Bibi and Asia (band).
Transclude selected recent additions for Asia
(a live example of the template's output for Asia was transcluded here)
- Hope that helps, Certes (talk) 12:07, 29 August 2019 (UTC)
- @Certes: Thanks for taking time to explain. I am familiar with the use of transclusion templates and I can directly use it on the portal page. But for Asia portal I have to set the template parameter to include ~50 different country names which makes the script very slow. If I directly run two such scripts (one for DYK, one for news) on the portal main page, the page crashes. That's why I was looking for a bot which could run the news code at certain intervals and paste only the "output" as a flat text (no LUA coding) - which I can then link to the portal main page. So, my requirement is slightly different. Arman (Talk) 12:37, 29 August 2019 (UTC)
- Does putting the slow-running templates on a subpage help? In other words, does transcluding the subpage just take its current content quickly or does it repeat the time-consuming work of rebuilding the subpage including calling all its templates? I'm not familiar enough with MediaWiki to be sure. Certes (talk) 12:43, 29 August 2019 (UTC)
- It does not help. When the main page calls the subpage template it effectively runs the code again. So my goal was to put the flat text in a subpage, not the code itself - and I was hoping a bot could help produce the flat text from the code at regular intervals rather than someone having to do it manually. Arman (Talk) 13:44, 29 August 2019 (UTC)
- What might help here is a general purpose substitution bot. There may already be such a thing but my quick search didn't find one, so this may really be a bot request. The idea is that Portal:Asia/News would contain text like
{{Newbot|/Source}}
and Portal:Asia/News/Source would contain the complex template stuff. {{Newbot}}
is a dummy template producing no text. Newbot should process each page with that template, replacing the rest of the page by the substituted contents of the page named in the parameter (prepending the current page name if the parameter begins with "/"). Then Portal:Asia can transclude Portal:Asia/News quickly: it's just text; the clever template work has already been done. Any comments from the bot regulars please? Certes (talk) 14:46, 29 August 2019 (UTC)
- I could make such a bot, I would just have to request toolforge access for the scheduling. It may take some time due to low BAG activity and toolforge requests though. --Trialpears (talk) 15:04, 29 August 2019 (UTC)
- Thanks Trialpears. It sounds like a flexible thing to have, if it's not too powerful to get approval, and fairly simple to write. To steal an idea from JL-Bot, it's safest to wrap the output in brackets such as
<!-- Newbot start -->
...
<!-- Newbot end -->
and only replace that section of the page, in case someone added other text that they want to keep. If it's going to run frequently then the template might need a |frequency=
parameter for pages that don't need to be updated every time it runs. Those brackets also give a place to store the last run time, if that's not getting too complex. Certes (talk) 15:24, 29 August 2019 (UTC)
parameter for pages that don't need to be updated every time it runs. Those brackets also give a place to store the last run time, if that's not getting too complex. Certes (talk) 15:24, 29 August 2019 (UTC)- All of those features sound good and I will make sure they're included. To start with I intend on running it once a day, increasing it if desirable. --Trialpears (talk) 15:40, 29 August 2019 (UTC)
- BRFA filed --Trialpears (talk) 22:39, 30 August 2019 (UTC)
- Thanks Trialpears and Certes. I was out of WP for a couple of days and really impressed to see that you have taken this ahead so far! Great going. Arman (Talk) 04:22, 1 September 2019 (UTC)
Automatically Update IUCN Statuses
Good afternoon. Does anyone have a program that can carry out the menial task of updating IUCN statuses? All of these can be retrieved from the IUCN's website, and finding the information is easy, just tedious. Of course this program would have to fetch some external information, such as the current version and the species ID number, but none of these are things that require too much expertise. — Preceding unsigned comment added by AidenD (talk • contribs) 01:46, 19 June 2019 (UTC)
- @AidenD: I looked into this and will say up front it is not easy. There is an IUCN API which is helpful. If we use African elephant as an example, how does one find the IUCN record? The API requires the name field to be set to loxodonta africana, but this exact name/string does not exist in the Wikipedia article, so that is not a reliable method. The API also accepts an IUCN taxon number (12392 for the elephant), and these IUCN numbers appear to be only sporadically populated in Enwiki and Wikidata. So the correct way is to populate Wikidata with IUCN taxon IDs for the target species (all those with a Wikidata record). Then populate Wikidata with the IUCN status ("NT", etc.) and reference URLs. Then create template(s) that are added to Speciesbox, taking the IUCN ID as the argument, which then display the status and reference pulled from Wikidata. The hardest part is creating a list of IUCN taxon numbers (12392) matched to the appropriate Wikidata number (Q185038). Once that list is available everything else becomes possible. -- GreenC 13:58, 21 July 2019 (UTC)
- "loxodonta africana" is actually African bush elephant (not that I knew that before looking for it). --Gonnym (talk) 17:20, 26 July 2019 (UTC)
- ID matching seems like a job for Mix n Match (Wikidata:Q28054658) which is a tool on Wikidata. I agree that status should be added to Wikidata if anywhere after IDs are matched. In the scenario above, I do not see utility in the template requiring the foreign ID on Wikipedia. Speciesbox would just grab the associated data from Wikidata. A local template might take the Wikidata item and retrieve the species classification, but that would be for the case of a non-infobox invocation, which I do not believe is the primary need here. --Izno (talk) 17:16, 26 July 2019 (UTC)
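For reference, a sketch of the ID-based lookup GreenC describes, assuming Python and a valid token for the IUCN Red List API v3 (field names should be checked against the API documentation):

import requests

def iucn_status(taxon_id, token):
    # Look up a species by its IUCN taxon number, e.g. 12392.
    url = "https://apiv3.iucnredlist.org/api/v3/species/id/%d" % taxon_id
    data = requests.get(url, params={"token": token}, timeout=30).json()
    result = data.get("result") or []
    # The assessment category is a code such as "LC", "NT" or "EN".
    return result[0]["category"] if result else None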
WikiProject Diptera talk page templates
Hi, I'm helping out the newly-created WP:WikiProject Diptera, who wants to add their template to talk pages without having to do so manually (there are very many flies). Below is a mostly recursive list of categories within Category:Flies. I've scanned the categories and removed those related to fly-fishing, but it appears that most are related to taxonomic groupings of flies. The talk page template is here. Let me know if I need to provide additional detail. Thanks, Enwebb (talk) 02:00, 17 September 2019 (UTC)
- Enwebb, do you see the template being added without
|class=
values, or is the intention to import them from other existing WikiProject templates (if applicable)? Primefac (talk) 00:50, 18 September 2019 (UTC) (please do not ping on reply)
- Importing existing class is fine, I think they just want some of the gadgets like article alerts and hot articles to get up and running. My second choice would be lacking the class parameter. Enwebb (talk) 02:57, 18 September 2019 (UTC)
- Any updates, Primefac? Sorry for the ping explicitly against your wishes, just wanted to make sure you had seen my response. Regards, Enwebb (talk) 01:12, 25 September 2019 (UTC)
- Enwebb, I think AnomieBOT can do this. Follow the instructions at the top of User talk:AnomieBOT. --Trialpears (talk) 07:21, 25 September 2019 (UTC)
email extractor bot
I want to request a bot that will scour websites and extract email addresses automatically — Preceding unsigned comment added by Shedy360 (talk • contribs) 23:26, 9 October 2019 (UTC)
- What could that possibly be used for? We don't host email addresses on Wikipedia, even for notable/popular individuals. Primefac (talk) 00:14, 10 October 2019 (UTC)
- Spamming. That's the only explanation. --Redrose64 🌹 (talk) 18:34, 10 October 2019 (UTC)
max steel
I want to request that my new bot create new research articles about clean energy — Preceding unsigned comment added by 162.232.248.237 (talk) 16:39, 28 September 2019 (UTC)
- Not a good task for a bot. There cannot be that many articles needing to be created, and likely they're in-depth enough that they should be hand-created. Mass-creation should really only be done for clearly-notable subjects, all in the same category, that would be too tedious and repetitive to do manually. Primefac (talk) 17:35, 28 September 2019 (UTC)
- We don't actually do "research articles" anyway, in terms of scientific or technological research. Still it's a nice idea. All the best: Rich Farmbrough, 20:49, 2 October 2019 (UTC).
replace bad apostrophe
I frequently come across an accent mark used as an apostrophe and it drives me bonkers. The ´ accent does have uses in linguistics articles but it should never be used as an apostrophe. I'm guessing it comes from copy-pasting.
Unfortunately the Wikipedia search tool does not work for finding it, and a general search on Google also doesn't work (´s site:en.wiki.x.io). Would it be possible to create a bot that is sensitive to this character and eradicates it when it's used as a possessive or contraction? I also see it with an extra space. For example:
- King´s → to King's
- King´ s → to King's
This isn't just a cosmetic thing. The ´ is not a real apostrophe and screenreaders probably won't read it that way. Thanks. —МандичкаYO 😜 22:51, 4 July 2019 (UTC)
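As a sketch, both replacements can be a single Python regex kept deliberately narrow (a letter, the acute accent, an optional space, then "s"), subject to the context concerns raised in the replies:

import re

ACUTE = "\u00b4"  # the ´ character

def fix_fake_apostrophes(text):
    # King´s -> King's   and   King´ s -> King's
    return re.sub(r"([A-Za-z])" + ACUTE + r" ?s\b", r"\1's", text)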
- Seems very much of a WP:CONTEXTBOT Headbomb {t · c · p · b} 22:59, 4 July 2019 (UTC)
- The following search will find them: [8] -- WOSlinker (talk) 11:52, 16 July 2019 (UTC)
- Depending on the language and libraries used, it also should be possible to semantically check that the word preceding the ´ is a noun —Wingedserif (talk) 23:38, 21 July 2019 (UTC)
- Doesn't seem like a contextbot issue to me; just find every instance in which a string of letters is followed by the ´ character, which then is followed by a space and an "s" and another space, or followed by an "s" and a space. I've never seen this character used in properly written English in a place where an apostrophe would make sense. Nyttend (talk) 02:16, 13 August 2019 (UTC)
- Nyttend I fixed 600 of these issues yesterday and found a few (about five) that I think were legitimate non-English uses. I'm not sure whether creating a typo up to 1% of the time is an acceptable error rate. I personally think adding it as a typo fix for AWB would be a better option. --Trialpears (talk) 09:59, 13 August 2019 (UTC)
- @Wikimandia and Trialpears: this has been an AWB typo rule since October 2018, see Wikipedia talk:AutoWikiBrowser/Typos#Move "'s" rule to WP:GENFIXES?. ~ Tom.Reding (talk ⋅dgaf) 14:43, 17 August 2019 (UTC)
- Tracked in T231012. ~ Tom.Reding (talk ⋅dgaf) 14:26, 22 August 2019 (UTC)
Adminbot to automatically remove permissions for inactivity
I think it's about time a bot removed permissions for inactive administrators and bureaucrats as per WP:INACTIVE and WP:BURACT. The bot would also automatically update the various lists at Wikipedia:Former Administrators and Wikipedia:Bureaucrats#Former bureaucrats. ToThAc (talk) 17:20, 12 September 2019 (UTC)
- Just to clarify, this would need to be a "cratbot", not an adminbot. Also, not sure there is a strong need for this. It only happens like once a month and usually only has single digits. As an example, the most recent desysop for inactivity was on September 1 and had only two accounts. Looking back a few years, it seems to average between two and five accounts, once a month. Seems like an easier task for a crat to do by hand. « Gonzo fan2007 (talk) @ 17:57, 12 September 2019 (UTC)
- I recall seeing this proposal in the past and it was rejected partly because it's not much of a workload and partly because having userrights removed by a machine is a bit jerkish. Jo-Jo Eumerus (talk, contributions) 18:10, 12 September 2019 (UTC)
Tag untagged drafts with {{draft}}
I'm seeing quite a few new editors with questions like these: they start articles in draftspace but can't figure out how to submit them for review because they deleted the line {{subst:AFC submission/draftnew}}<!-- Important, do not remove this line before article has been created. -->.
We have 10,000 draft articles that don't include the instructions "Click here to submit your draft for review"; many of the authors just give up, and the draft is then deleted after 6 months.
Could a bot maybe regularly slap {{draft}} or {{subst:AFC draft}} onto these drafts and ping the author in the edit summary, encouraging them to finish? Some experienced editors do keep stuff in draftspace so maybe we'd want to exempt creations by experienced editors.
– Thjarkur (talk) 14:16, 15 September 2019 (UTC)
- Þjarkur, Needs wider discussion. I agree that this is a frequent problem among new contributors, and telling someone that the draft they thought they submitted 3 months ago wasn't in the queue is never fun. I seem to recall similar proposals facing opposition in the past, so I'd suggest you start a discussion at WT:AFC or one of the Village Pumps before anyone starts work on a task. --AntiCompositeNumber (talk) 16:17, 16 September 2019 (UTC)
NA-Class bird articles with hard-coded NAs
There are too many articles in Category:NA-Class bird articles (12) that have |class=NA
preventing the articles from being sorted into the appropriate categories. Could a bot go through and change all the instances of |class=NA
/|class=Na
/|class=na
to |class=
(i.e. remove the NA)? Probably could also do the same for |importance=NA
. --Nessie (talk) 19:18, 27 September 2019 (UTC)
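A sketch of the blanking in Python, assuming the banner in question is {{WikiProject Birds}}; restricting the substitution to the banner avoids touching other templates on the page:

import re

BANNER = re.compile(r"\{\{\s*WikiProject Birds\b[^{}]*\}\}", re.IGNORECASE)

def blank_na(talk_text):
    # Within the banner, turn |class=NA and |importance=NA (any
    # capitalisation) into the empty value, leaving other parameters alone.
    def fix(match):
        return re.sub(r"\|\s*(class|importance)\s*=\s*NA\b", r"|\1=",
                      match.group(0), flags=re.IGNORECASE)
    return BANNER.sub(fix, talk_text)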
- Agreed, and it's best to do both together. --Redrose64 🌹 (talk) 19:20, 27 September 2019 (UTC)
- Four pages where I removed
|importance=NA
(1, 2, 3, 4). Three disambig pages where I removed the banner (1, 2, 3). One page that popped out as needing assessment (1). One archive page where I removed the banner (1). --Izno (talk) 21:22, 27 September 2019 (UTC)
- Awesome possum. Thanks for the quick job, Izno! I checked the ones you noted above and made any further adjustments. --Nessie (talk) 01:37, 28 September 2019 (UTC)
AnomieBOT for converting WikiProjects to taskforces of WP:MOLBIO
- Similar to request Wikipedia:Bot_requests/Archive_22#Change_from_WikiProject_Neurology_to_task_force
Hi there. I'm converting WP:MCB, WP:GEN, WP:BIOP, WP:COMBIO, WP:CELLSIG, and WP:WikiProject_RNA into taskforces of a centralised WP:WikiProject Molecular Biology (see this discussion, and page move requests); however, all of the articles under these projects now need to have their talk page banners replaced with a different one that classifies them under a task force of WP:MOLBIO. Is it possible to edit the banners for all the pages under the relevant categories to have their talk page banners replaced from {{WikiProject XYZ|class=|importance=}}
to {{WikiProject Molecular Biology|class=|importance=|XYZ=yes}}
, keeping the class and importance values already present on the talk page, and merging into a single template where a page is tagged with the templates of multiple taskforces of WP:WikiProject Molecular Biology?
- Category:WikiProject Molecular and Cellular Biology articles
- Category:WikiProject Genetics articles
- Category:Computational Biology articles by importance
- Category:Computational Biology articles by quality
- Category:WikiProject Biophysics articles
- Category:WikiProject Cell signaling articles
- Category:Gene Wiki articles
Thank you in advance for any assistance! T.Shafee(Evo&Evo)talk 11:42, 15 June 2019 (UTC)
- Evo, yes, it should be possible. All that is required is to convert each of the "old" banners into wrappers for the "new" banner, and then add
{{subst only|auto=yes}}
to the documentation. Primefac (talk) 11:51, 15 June 2019 (UTC)
- @Primefac: Brilliant! Let me know if there's any additional info you'd need from me. There's also a comment here about setting up WP 1.0 bot for a
{{WikiProject Molecular Biology}}
template that I might need some help with. Thanks again, T.Shafee(Evo&Evo)talk 12:49, 16 June 2019 (UTC)
- @Primefac: I also just noticed that there should also be a
|Metabolism-pathways=yes
parameter imported over from some articles tagged with the {{WPMCB}}
template. I hope that doesn't complicate the bot function too much. Thanks again! T.Shafee(Evo&Evo)talk 11:19, 7 July 2019 (UTC)
- Shouldn't be an issue as long as the code is added before the wrapper is subst. Primefac (talk) 15:42, 7 July 2019 (UTC)
- @Primefac: Sorry to bother, I've been trying to work out how to do that site-wide subst to replace all the
{{WikiProject Biophysics}}
with {{WikiProject Molecular Biology|taskforce=biophysics}}
, but I'm not managing to get the syntax right. See the test wrapper template in my sandbox; transcluding it into the main sandbox as a test doesn't substitute the template as expected. Could you give an example for one of them that I can reproduce? Does it not require a bot to do it? Thanks! T.Shafee(Evo&Evo)talk 06:14, 1 August 2019 (UTC)
- I think you're overthinking this. When I converted {{WikiProject A1 Grand Prix}} into a wrapper for {{WikiProject Motorsport}} I used the following:
<includeonly>{{WikiProject Motorsport |class={{{class|}}} |importance={{{importance|}}} |a1grandprix-taskforce=yes |category={{{category|}}}}}</includeonly><noinclude>This template is deprecated, please use {{t|WikiProject Motorsport}} using {{para|a1grandprix-taskforce|yes}}.</noinclude>
- So if you want to change {{WikiProject Biophysics}} you would use
{{WikiProject Molecular Biology|importance={{{importance|}}}|biophysics=yes}}
. Adding a {{subst only|auto=yes}}
on the template itself would then get it subst'd by the bot. Primefac (talk) 18:49, 1 August 2019 (UTC)
- I'm also not sure that MCB is the natural place to host something like biophysics... Headbomb {t · c · p · b} 17:02, 9 August 2019 (UTC)
- @Primefac: I think I see what you're saying, but I would have thought that'd lead to duplicate notices on many pages (e.g. Talk:KCNE2). Part of the reason for the merge is that there's a >75% overlap between WP:MCB and WP:GEN tagged pages. When Neurology was merged into WP:MED, it seemed to require the use of user:Anomie's bots. @Headbomb: The idea is to merge WP:BIOP into WP:MOLBIO rather than WP:MCB. You can see the longer discussion here. T.Shafee(Evo&Evo)talk 07:00, 16 August 2019 (UTC)
- Ah, didn't realize there was that much duplication. In that case, a bot/AWB run might be better. I did one recently for a template conversion somewhere... I'm on holiday so I don't have time for a BRFA, so you might be better off with someone else. Primefac (talk) 06:16, 17 August 2019 (UTC)
Bot to clear Category:Journal articles needing infoboxes
If you find {{Infobox journal}} on these pages, could you remove |needs-infobox=
from {{WikiProject Academic Journals}} (and its redirects). Could run nightly/weekly/monthly too. Headbomb {t · c · p · b} 16:11, 21 August 2019 (UTC)
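A minimal sketch of the check-and-strip step in Python, assuming the article and talk page wikitext have already been fetched; redirects of the banner and infobox names are left out here:

import re

def strip_needs_infobox(article_text, talk_text):
    # Only act when the article already transcludes {{Infobox journal}}.
    if not re.search(r"\{\{\s*Infobox journal\s*[|}]", article_text,
                     re.IGNORECASE):
        return talk_text
    # Drop the parameter from the WikiProject banner.
    return re.sub(
        r"(\{\{\s*WikiProject Academic Journals\b[^{}]*?)"
        r"\|\s*needs-infobox\s*=\s*[^|{}]*",
        r"\1", talk_text, flags=re.IGNORECASE)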
- I wonder, would such a task be generalizable? I.e., a bot that removes that parameter from other templates that have it when a page has an infobox? Jo-Jo Eumerus (talk, contributions) 17:59, 21 August 2019 (UTC)
- I've been thinking the same thing. I feel a check page would work well, where WikiProjects can add instructions such as whether they want a one-time run or a recurring one, whether only certain infoboxes are acceptable (as with academic journals), and so on. I would gladly implement this task. --Trialpears (talk) 18:21, 21 August 2019 (UTC)
- I'm sure there are things that are generalizable, but there will likely be some exceptions as well. However, while that's in the cooking pot, the above task is straightforward and won't have issues. Headbomb {t · c · p · b} 20:05, 21 August 2019 (UTC)
- It would be interesting to see how many articles have any type of infobox on them while also having
|needs-infobox=
in any WikiProject banner. Just to get an idea of the extent of the problem, if any. « Gonzo fan2007 (talk) @ 20:43, 21 August 2019 (UTC)
- BRFA filed I have filed a BRFA and looked at how many pages this would affect. For Category:Journal articles needing infoboxes roughly a quarter of the pages would be affected. --Trialpears (talk) 22:35, 21 August 2019 (UTC)
An automated bot that will replace double spaces with single spaces and replace curly quotation/apostrophe marks with straight ones
I'm commonly encountering these problems in articles, even highly rated ones. Throughthemind (talk) 18:11, 8 October 2019 (UTC)
- Altering double spaces to single has no effect, and so fails WP:COSMETICBOT. Altering curly quotes to straight needs to be done with care, and so would be against WP:CONTEXTBOT. --Redrose64 🌹 (talk) 18:27, 8 October 2019 (UTC)
Stub sorting
When an article has multiple stub templates, say {{X-stub}}
and {{Y-stub}}
and if there exists a template {{X-Y-stub}}
or {{Y-X-stub}}
, a bot should replace the two stub templates with the combined one. SD0001 (talk) 14:18, 17 August 2019 (UTC)
- Example edit. Existing stubs tags should be removed from wherever they are and the new tag should be placed at the very end of the article with 2 blank lines preceding it per WP:STUBSPACING. SD0001 (talk) 14:26, 17 August 2019 (UTC)
SD0001, the number of permutations of name combinations seems huge. Using a brute-force method, the bot would need a list of every category on Wikipedia and then for each article, it would generate every possible permutation of combined names for each category in that article, and check each one against every category name on Wikipedia - sort of like cracking a safe by trying every possible combination. Can you think of a better way to narrow it down? -- GreenC 13:23, 22 August 2019 (UTC)
- You could start with a list of templates of the form Foo-Bar-stub, check whether Foo-stub and Bar-stub both exist, and consider editing articles which use both. Beware of false positives: {{DC-stub}} plus {{Comics-stub}} denotes a comic in Washington, not necessarily a {{DC-Comics-stub}}. Certes (talk) 14:39, 22 August 2019 (UTC)
- @GreenC: There are about 30,300 stub templates on Wikipedia. Store their names as strings in a hash table (not an array) so that we can search whether a given string is in the list in O(1) time (rather than O(n) time). Most programming languages have built-in support for hash tables. Strictly speaking, it's O(length of string) rather than O(1), though the lengths of strings are small enough. We could use advanced data structures like ternary search tries that can be searched even faster. But of course they are very difficult to code and the use would be justified only if we had millions of strings to search from.
- Additionally, there are about 3400 single-word stub templates (e.g. "castle-stub") which we'd never be looking for, and which can hence be removed from the list. But again this is not necessary, as the efficiency of search in a hash table doesn't depend on the number of items in the table.
- Regarding the generation of permutations: (i) if there are two stub tags, X-stub and Y-stub, there are only two permutations, X-Y-stub and Y-X-stub, and it's really unlikely that both are available, so this is an easy case. (ii) if there are 3 stub tags, X-stub, Y-stub and Z-stub, then first check the 6 all-in-one permutations: X-Y-Z, Z-X-Y, etc. If not found, search for the 6 two-in-one combinations: X-Y, Y-Z, Y-X, etc. If 2 of them match, add both. If 3 of them match (very unlikely), add the page to a list for human review. (iii) if there are 4 or more stub tags (there shouldn't be that many), ignore it and add the page to a list for human review. SD0001 (talk) 15:31, 22 August 2019 (UTC)
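The two-tag case as a Python sketch; the three-name set below is a stand-in for the full list of ~30,300 stub template names, which a real run would load into a set (Python's built-in hash table):

# Stand-in for the full list of ~30,300 stub template names.
stub_templates = {"X-stub", "Y-stub", "X-Y-stub"}

def combined_stub(x, y):
    # x and y are tags like "X-stub" and "Y-stub"; try both orderings of
    # the merged name, each an O(1) membership test against the hash set.
    a, b = x[:-len("-stub")], y[:-len("-stub")]
    for candidate in (a + "-" + b + "-stub", b + "-" + a + "-stub"):
        if candidate in stub_templates:
            return candidate
    return None

# combined_stub("X-stub", "Y-stub") returns "X-Y-stub"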
- Ok. Any thoughts on the context problem raised by Certes with {{DC-stub}} + {{Comics-stub}} != {{DC-Comics-stub}}. -- GreenC 16:20, 22 August 2019 (UTC)
- I think this would be really rare given the stringent stub type naming conventions which specifically try to avoid this sort of thing. I can't think of any other such exception even though I have been stub-sorting a lot lately. Regarding the one given, clearly DC-stub and Comics-stub won't be present together on any page. So I don't think this is an issue (unless someone finds more such exceptions). SD0001 (talk) 17:05, 22 August 2019 (UTC)
- The way to go here is not by analysing stub templates, but by looking at their categories to see if they have a common subcategory. For example, an article might have
{{Scotland-stub}}
and {{Railstation-stub}}
- the former categorises to Category:Scotland stubs, the latter to Category:Railway station stubs - but if you go deep enough in the category tree, these have a common subcategory, Category:Scotland railway station stubs, for which the stub template is {{Scotland-railstation-stub}}. --Redrose64 🌹 (talk) 22:17, 22 August 2019 (UTC)
- That definitely sounds ideal. But I don't think it is possible because there is no one-to-one correspondence between stub templates and categories. Example: {{Oman-cricket-bio-stub}} categorises into both Category:Omani sportspeople stubs and Category:Asian cricket biography stubs, both of which have a lot of stuff unrelated to Oman cricket bios. SD0001 (talk) 03:11, 23 August 2019 (UTC)
- That is what we call an "upmerged stub template", and pretty much all of these are dead-ends as far as further specialisation goes. There won't be, for example, any decade-specific templates like
{{Oman-cricket-bio-1970s-stub}}
(compare {{England-cricket-bio-1970s-stub}}
). --Redrose64 🌹 (talk) 23:20, 23 August 2019 (UTC)
- I see. That is great. But can you think of a way to find whether two cats have a common subcat, Redrose64? SD0001 (talk) 04:35, 26 August 2019 (UTC)
- Is there any indication of how many pages have multiple stub templates? Would it be possible to create a report of the most common combinations and knock those off first? Spike 'em (talk) 20:24, 22 August 2019 (UTC)
- Not all articles with multiple stub templates have the potential for refinement. For example, an article such as Cheltenham High Street Halt railway station might have
{{SouthWestEngland-railstation-stub}}
and {{Gloucestershire-struct-stub}}
, which categorise to Category:South West England railway station stubs and Category:Gloucestershire building and structure stubs respectively - they have no common subcategory, so no further refinement may be performed by a bot. --Redrose64 🌹 (talk) 22:17, 22 August 2019 (UTC)
- BTW, just discovered that there used to be a bot approved for this task long ago. That bot also did resortings based on categorisation and infoboxes (manually triggered by the op for each infobox/category type, I think). SD0001 (talk) 03:15, 23 August 2019 (UTC)
- Another complication: templates can have multiple names, i.e. redirects. It might be safe to assume the template's primary name is what should be used, but a database of redirect names mapped to primary template names would also be needed. -- GreenC 03:37, 26 August 2019 (UTC)
- Redirects are very uncommon for stub templates. But if they do pop up, I don't think there's a problem whether we use the primary name or redirect name. SD0001 (talk) 14:59, 27 August 2019 (UTC)
Operator to take over Legobot Task 33
See discussion at Wikipedia:Bots/Noticeboard#User:Legobot Request. Legoktm is no longer taking feature requests for User:Legobot (just keeping the bot alive), specifically at WP:GAN. Since Legobot runs many important tasks, it would be helpful if a new operator would be willing to take over control and maintenance of the tasks Legobot performs, either as a whole or as a subset of the tasks (e.g. only WP:GAN tasks). Legoktm mentioned they are happy to hand off the task(s) to another operator. Anyone interested? « Gonzo fan2007 (talk) @ 16:04, 19 August 2019 (UTC)
- Gonzo fan2007, a similar request was made here this past February 6 by Mike Christie: Wikipedia:Bot requests/Archive 77#Take over GAN functions from Legobot, which also has a great deal of information about the work likely involved and a number of the known bugs. Pinging TheSandDoctor and Kees08, who were active in that discussion; the final post was from TheSandDoctor, who had been working on new code and checking the GAN database made available by Legoktm, on June 25. I believe Wugapodes has expressed some interest in further GAN-related coding (they took over the /Reports page last year), though I don't know whether they had this in mind. BlueMoonset (talk) 16:29, 19 August 2019 (UTC)
- Thanks for the background discussion BlueMoonset. « Gonzo fan2007 (talk) @ 16:32, 19 August 2019 (UTC)
- Looking into the existing code, I agree that the best course is probably a port from PHP to a new language so TheSandDoctor's work so far is probably a good starting point. I don't know PHP at all and the database uses SQL which I don't know, so I am probably not a great candidate for taking this task on. I'm willing to help out where I can because this is a big task for anyone, but I'm pretty limited by my lack of knowledge of the languages. Wug·a·po·des 18:43, 19 August 2019 (UTC)
- Occasionally bug reports are posted at User talk:Legobot or User talk:Legoktm concerning user talk page notifications suggesting that a GA nom has failed whereas the reality is that it passed. These seem to be second or subsequent attempts at putting a page through GA after the first failed. Looking at some of the bot source, I find SQL code to create tables, to add rows to those tables - but little to update existing rows and nothing to delete rows that are no longer needed. --Redrose64 🌹 (talk) 22:14, 19 August 2019 (UTC)
- Speaking as an end user (and not a bot operator) and as the person requesting the bot, if User:Legobot failed at WP:GAN, there would be significant disruptions to the project. Its GA work is completely taken for granted. I think that the preference would be for a new bot to take on just the GA tasks (note that Legobot has other active tasks). It would appear based on my review and a look back at past comments that this would include:
- Updating Wikipedia:Good article nominations after {{GAN}} has been added to an article talk page, or if a step in the review process has changed (on-hold, failed, passed, etc)
- Notifying nominators of new status of reviews (begin review, on-hold, failed, passed, etc)
- Adding {{Good article}} to promoted articles
- Update individual topic lists at Wikipedia:Good article nominations/Topic lists
- Updating User:GA bot/Stats
- Adding |oldid= to {{GA}} when missing (Legobot Task 18)
- As previously mentioned, it would also be beneficial to fix some bugs and streamline the process. I'm not sure if it is preferable to go this way, but maybe, if a bot owner wants to take this on, we could work on slowly weaning User:Legobot off GA tasks instead of trying to completely replace it in one shot. As an example, sub-tasks 3, 5, and 6 are fairly straightforward items (in my limited understanding of coding) and could probably be submitted to WP:BRFA as individual tasks. That way, as individual sub-tasks are brought on board, we (the end users) could work with the new bot owner to ensure each process is working smoothly. It would be wonderful if a naming structure like User:Good Article Bot (similar to User:FACBot) or something similar could be utilized to specialize the process. Just my input and thoughts on how to go about this. Obviously we need an interested party first; I am happy to assist with manual updating of pages and working through the new process. « Gonzo fan2007 (talk) @ 23:12, 19 August 2019 (UTC)
- Before Legobot took over the tasks by taking over the code base, the bot handling the GAN page was known as GAbot, run by Chris G (who I think got the code base from someone else). I'm not sure how easy it would be to peel off some but not all of the update tasks into a new bot while leaving Legobot with the rest; someone who's looked at the code would have a better idea of how to turn off parts of the GAN code (if it can be) as the new bot is activated piece by piece. The one thing that has been long requested that isn't covered above is the ability to split topics into more subtopics. I didn't see that this was a part of the SQL database—there didn't seem to be a table there for topics and their subtopics—so perhaps if someone can take a dive into the code they can figure out how the bot makes those determinations and therefore what modifications we would need to make. Just a thought. (And believe me, Legobot's GAN work is not taken for granted; we've had a few outages over the years that have been extremely disruptive, but Legoktm has been able to patch things together.) BlueMoonset (talk) 05:19, 20 August 2019 (UTC)
- Perhaps Hawkeye7 would be interested in expanding out FACbot to include GAbot functionality (depending on Sand Doctors progress etc). Kees08 (Talk) 15:31, 20 August 2019 (UTC)
- I had considered it in the past. There are various technical issues here though. Like the others I am not too familiar with PHP or Python (I normally write bot code in Perl and C#) although I do know SQL well. (No deletions is a bad sign; if true it means that the database will continue expanding until we run into limits or performance problems.) The Legobot runs frequently and having another bot performing tasks could result in causing the very problems we have been discussing. Shutting it down is guaranteed to be disruptive and any full replacement is likely to be buggy for a while. (I would personally appreciate a bot updating Wikipedia:Good articles instead of a reviewer having to do it.) Hawkeye7 (discuss) 19:55, 20 August 2019 (UTC)
- BlueMoonset, as long as User:Legobot is {{nobots}} compliant, we could fairly easily exclude Legobot from editing specific pages for tasks 1, 4, and 5 from the list above. Task 6 is also a separate Legobot task, so presumably this is separate coding from other GA-related tasks (and could more easily be usurped by the new bot). We could also develop mirrored pages that would allow the new bot to edit concurrently with Legobot for a certain time until all tasks are running smoothly. « Gonzo fan2007 (talk) @ 20:01, 20 August 2019 (UTC)
- @BlueMoonset, Hawkeye7, TheSandDoctor, Wugapodes, Legoktm, Kees08, and Mike Christie: any additional ideas or information to add? I would be especially interested to hear from TheSandDoctor on their status, if any. « Gonzo fan2007 (talk) @ 21:01, 26 August 2019 (UTC)
- Hello @Gonzo fan2007, BlueMoonset, Hawkeye7, Wugapodes, Legoktm, Kees08, and Mike Christie:. My apologies for my delayed response - I am quite busy "in the real world" at the moment unfortunately. I currently have a GitHub repo relating to this, but haven't been able to dedicate the time required, nor has DatGuy. If someone is willing to assist with this, I would be quite open to the idea of another hand to help out. Most - if not all - of the existing PHP code has been translated/updated to Python, but I have not been able to test it as of yet. It might be ready, but I simultaneously think that it needs further tests of sorts prior to filing a BRFA (thus allowing for testing). --TheSandDoctor Talk 05:25, 28 August 2019 (UTC)
@TheSandDoctor: Oh wow this is great! If all we need is to test it some I could probably do that over the next couple weeks. I'll make a pull request if I need to make changes to get it working. Wug·a·po·des 05:43, 28 August 2019 (UTC)
- @TheSandDoctor: thanks for the update! Appreciate the work you have done so far. Let me know if you need any assistance. « Gonzo fan2007 (talk) @ 16:07, 28 August 2019 (UTC)
Bot to fix gazillions of formatting errors in "Cite news" templates
Recently, apparently due to some change in the way Template:Cite news works, it is no longer permissible to italicize the publisher name in the "publisher=" parameter. There are therefore now countless articles with error messages in the footnotes saying "Italic or bold markup not allowed in: |publisher=". It would be nice, therefore, if a bot could sweep through and remove the preceding and following '' (and, I guess, ''' or ''''') formatting occurring in these "publisher=" parameters. bd2412 T 01:24, 10 September 2019 (UTC)
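A sketch of the clean-up as a Python regex covering the '' / ''' / ''''' cases (the BRFAs linked in the replies handle more parameters than just |publisher=):

import re

def strip_publisher_markup(wikitext):
    # |publisher=''The Times'' -> |publisher=The Times (also ''' etc.)
    return re.sub(
        r"(\|\s*publisher\s*=\s*)'{2,5}([^'|{}]*)'{2,5}",
        r"\1\2", wikitext)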
- It is this: Category:CS1 errors: markup. 39k pages. -- GreenC 01:51, 10 September 2019 (UTC)
- @BD2412: Wikipedia:Bots/Requests for approval/DannyS712 bot 61 - not just for cite news DannyS712 (talk) 01:56, 10 September 2019 (UTC)
- WP:Bots/Requests for approval/Monkbot 14. --Izno (talk) 02:13, 10 September 2019 (UTC)
- Good, I'm glad someone is doing this. I started doing it manually and calculated it would take about twenty years by hand. bd2412 T 02:14, 10 September 2019 (UTC)
- WP:Bots/Requests for approval/Monkbot 14. --Izno (talk) 02:13, 10 September 2019 (UTC)
List of Wikipedians by article count
Hi. This page has been dormant for two years. Would it be possible to re-activate this list so it is updated daily, much like the edits list? I contacted the bot owner who used to do this some time ago, and they suggested I try here. Nothing fancy with this, same as before with a basic count of number of pages and number of redirects. Thanks. Lugnuts Fire Walk with Me 12:47, 16 September 2019 (UTC)
- Should we update daily or weekly? --Kanashimi (talk) 22:34, 18 September 2019 (UTC)
- @Kanashimi: ideally daily, if possible, but a weekly update would be better than nothing. Thanks. Lugnuts Fire Walk with Me 08:18, 19 September 2019 (UTC)
- Hrm, is it really a (socially) good idea to have such a "ranking" of Wikipedians? Jo-Jo Eumerus (talk, contributions) 08:37, 19 September 2019 (UTC)
- @Jo-Jo Eumerus: - I can't really answer that, but the sister list of users by edits is updated daily. Lugnuts Fire Walk with Me 07:52, 20 September 2019 (UTC)
- Yeah, if memory serves some of the people on that list have been causing problems while pursuing ever increasing edit counts. That's a big part of the reason why I am concerned with the existence of both lists. Jo-Jo Eumerus (talk, contributions) 08:10, 20 September 2019 (UTC)
- @Jo-Jo Eumerus: Some of the people in higher positions use AWB to achieve their high counts. Some of these have done things like add non-existent WikiProject banners to large numbers of talk pages; and have objected when I have asked them to preview their edits, claiming that they "don't have time". They apparently also don't have time to read WP:AWBRULES no. 1. --Redrose64 🌹 (talk) 19:39, 20 September 2019 (UTC)
- What purpose does it serve to update this daily? Leaky caldron (talk) 08:30, 20 September 2019 (UTC)
- @Leaky caldron: Wikipedia:List of Wikipedians by number of edits/1–1000 was updated weekly (on Wednesday mornings European time, late Tuesdays American time) until 25 June 2014. Then it didn't update for several weeks - and when it resumed on 30 July 2014, the update frequency became daily. MZMcBride (talk · contribs) is the person to ask as to why it was changed. --Redrose64 🌹 (talk) 19:18, 20 September 2019 (UTC)
- I'm not interested why it was changed. I am only keen to know what purpose it serves to provide a daily running total of articles created. Leaky caldron (talk) 19:34, 20 September 2019 (UTC)
- As I noted, that's one for MZMcBride - presumably somebody said "please make it daily". --Redrose64 🌹 (talk) 19:38, 20 September 2019 (UTC)
- "Nothing fancy with this, same as before with a basic count of number of pages and number of redirects" -- Is this actually a basic problem eg. a simple API call that responds quickly without using too many resources. And why did the previous bot stop working, was it too hard to keep up and running and too many complications. -- GreenC 14:49, 20 September 2019 (UTC)
- According to the old page it worked thusly: "For every page currently in the article namespace on the English Wikipedia, the author of the earliest revision of that page is queried. This information is aggregated by author name". From recent experience (yesterday), querying every page on enwiki (nearing 6 million) with a low-byte-count API call takes upwards of 15 days to complete. It could be done faster by requesting more than 1 article per query. Still, this is a pretty significant amount and probably shouldn't be done more than once a month or less to be kind on resources, not done at max speed if possible. It could also run on the Toolforge Grid so it doesn't use up bandwidth sending data to a remote location. -- GreenC 15:05, 20 September 2019 (UTC)
- We may cache the result to avoid querying again, since the creator of an article won't change. --Kanashimi (talk) 00:11, 21 September 2019 (UTC)
- That would miss the deletion of older articles and redirects, wouldn't it? bd2412 T 00:28, 21 September 2019 (UTC)
- Perhaps we can cache deleted articles as well (querying only articles deleted after the latest run). --Kanashimi (talk) 01:49, 21 September 2019 (UTC)
- BTW my mistake, the creator requests for the full list wouldn't be 15 days but probably only a few hours because it can retrieve up to 5000 per query (the other project I was working on had to be 1 at a time which wouldn't be the case here). -- GreenC 00:56, 21 September 2019 (UTC)
- Thanks for everyone's input and comments. I can't comment on performance/resource issues with compiling the info. All that I know is that it was updated daily, but that ceased two years ago. If daily is not possible, then a weekly or even a monthly update would be better than the current situation. Thanks again. Lugnuts Fire Walk with Me 11:30, 21 September 2019 (UTC)
- I agree with that. bd2412 T 13:23, 21 September 2019 (UTC)
Coding... -- GreenC 14:30, 21 September 2019 (UTC)
- @GreenC: Well... I have already started writing the code... I will stop coding; thank you for the fast response. However, I think using the database would be a better idea than using the API. Just an idea. --Kanashimi (talk) 21:51, 21 September 2019 (UTC)
- @Kanashimi: I am about 80% done because a lot of the code is repurposed from other projects, why I am using the API. But if you like to continue coding, please go ahead. If you run into trouble or decide to not pursue it let me know and I will pick it up. There is plenty of other work for me to do on other projects. Also I was not worrying about deleted articles, it seems impractical to track deleted articles from the beginning of Wikipedia. Also so many of the articles started out as short stubs or redirects, then someone else made them into longer articles, it's unclear what the list is really showing. Not sure how to address it. -- GreenC 22:21, 21 September 2019 (UTC)
- Thanks a lot for your comments. I also have other coding work and will not finish this task soon; I think you will be faster than me. About the problem of article quality versus author contribution, we could count the words of each article, but this would significantly increase the burden of the query, and it is not an absolute standard. --Kanashimi (talk) 22:45, 21 September 2019 (UTC)
Oh yeah, people used to get upset that the article count report included redirects and bots in the rankings. Good times. --MZMcBride (talk) 01:10, 22 September 2019 (UTC)
@Kanashimi and Lugnuts: The program (pgcount, i.e. page count) is running. It will take a while (a week?) because it is the first run, building a cache. I thought it might go faster and retrieve 5000 (or 500) at a time, but the API doesn't support multiple articles for revision information, only 1 article per request. Will post again when it starts posting the tables. Future cache-based runs should go much faster. Have not decided how often it will run, depending on how long the cache runs take. It might actually run faster and use fewer resources to run daily, since the diffs are smaller and the cache hits higher; will see. -- GreenC 17:02, 22 September 2019 (UTC)
- Superb - thanks for your work and the update. Lugnuts Fire Walk with Me 17:07, 22 September 2019 (UTC)
- Good. If I have time, I may try a database version. I guess it may be faster than the API version and may not even need a cache. --Kanashimi (talk) 22:40, 22 September 2019 (UTC)
- @Kanashimi: Being discussed at WP:SQLREQ. -- GreenC 20:59, 23 September 2019 (UTC)
- @GreenC: - Do you have a progress update? Thanks. Lugnuts Fire Walk with Me 19:24, 29 September 2019 (UTC)
- @Lugnuts: it finished creating the cache today, took about 7 days. It will rebuild the cache every so often to account for username renames. Normal runs will finish quicker, but it won't run every 24hrs. It will not track redirects for now for a couple of reasons. -- GreenC 21:14, 29 September 2019 (UTC)
- Excellent work - thank you! Lugnuts Fire Walk with Me 06:54, 30 September 2019 (UTC)
Done -- GreenC 02:02, 22 October 2019 (UTC)
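For reference, a minimal sketch of the lookup the run performs, under stated assumptions: the standard MediaWiki Action API (one page per revisions request, as GreenC notes above) and an illustrative JSON cache file name. It is a sketch, not the actual pgcount implementation.

```python
import json, os
import requests

API = "https://en.wikipedia.org/w/api.php"
CACHE = "creator_cache.json"  # illustrative file name

cache = json.load(open(CACHE)) if os.path.exists(CACHE) else {}

def creator_of(title):
    """Return the username behind the earliest revision of `title`."""
    if title in cache:                      # a page's creator never changes,
        return cache[title]                 # so a cache hit skips the network
    reply = requests.get(API, params={
        "action": "query", "format": "json",
        "prop": "revisions", "titles": title,
        "rvlimit": 1, "rvdir": "newer",     # oldest revision first
        "rvprop": "user",
    }, timeout=30).json()
    page = next(iter(reply["query"]["pages"].values()))
    user = page.get("revisions", [{}])[0].get("user")
    cache[title] = user
    return user

print(creator_of("Luna 4"))
with open(CACHE, "w") as f:
    json.dump(cache, f)
```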
Pages using deprecated image syntax
Hello. I was wondering if there was a bot that could go through the backlog of Category:Pages using deprecated image syntax. I have zero bot experience, and I would rather leave it to the experts :) I don't think this falls under any of the commonly requested bots either. Thanks! --MrLinkinPark333 (talk) 17:29, 27 July 2019 (UTC)
- There are roughly three types of pages in this category:
- Images used without any additional metadata, like |image=[[File:Example.png|thumb]]. These can be easily fixed by removing the excess markup.
- Images used with additional information, like |image=[[File:Example.png|thumb|175px|Logo for Example]]. This additional markup should not be removed automatically, but should instead be moved to the appropriate parameters in the template.
- Pages where markup is used to do something more complicated, like displaying two images side-by-side. As far as I can tell, there isn't really anything to fix here.
- The other problem is that the various infobox templates are not always consistent with their parameters. Module:Infobox supports upright scaling, but not all infoboxes have been updated with the correct parameter. Some infoboxes have multiple different image fields (image, logo, seal, flag), while others alias them together into one. The Type 1 pages could be fixed pretty easily, but the Type 2 ones may have more issues requiring testing and human review. --AntiCompositeNumber (talk) 18:44, 27 July 2019 (UTC)
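To make the "Type 1" case concrete, here is a hedged sketch of the mechanical fix. The parameter name image and the regex are illustrative only; a production rule would have to handle each infobox's real parameter names, and it deliberately leaves Type 2 and Type 3 uses alone.

```python
import re

# |image=[[File:Example.png|thumb]]  ->  |image=Example.png
TYPE1 = re.compile(
    r"(\|\s*image\s*=\s*)\[\[\s*(?:File|Image)\s*:\s*([^|\]]+?)\s*(?:\|\s*thumb\s*)?\]\]",
    re.IGNORECASE,
)

def fix_type1(wikitext):
    # Type 2 values (extra size/caption fields) cannot match [^|\]]+?
    # across a pipe, so they are left untouched for human review.
    return TYPE1.sub(lambda m: m.group(1) + m.group(2), wikitext)

print(fix_type1("|image=[[File:Example.png|thumb]]"))
# |image=Example.png
```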
- @AntiCompositeNumber: Hmm. That's complex. Would it be useful to go through #1 only with a bot then #2 done manually? Also, why would #3 be there in the category if nothing needs to be fixed? --MrLinkinPark333 (talk) 19:25, 27 July 2019 (UTC)
- @MrLinkinPark333: Currently, Module:InfoboxImage basically says "If the image parameter starts with [[ and it isn't a thumbnail, then it's deprecated image syntax." (Thumbnails get put in their own category). As a first step, it might be a good idea to have the module categorize image parameters with multiple files differently. AntiCompositeNumber (talk) 21:19, 27 July 2019 (UTC)
- Is this just a cosmetic change? If it is, adding it as a default AWB fix may be a better way to handle this, as that would couple it with more substantial edits. --Trialpears (talk) 19:50, 27 July 2019 (UTC)
- @Trialpears: I'm just more interested in finding a way to take a chunk out of the backlog. If AWB is more suitable, feel free to point me that way :) --MrLinkinPark333 (talk) 20:12, 27 July 2019 (UTC)
- MrLinkinPark333 I've been looking a bit and it seems all standard AWB fixes are only made outside of templates, so just adding it to Wikipedia:AutoWikiBrowser/Typos wouldn't work. I suggest asking at WT:AWB or the AWB phabricator. If the edit isn't considered cosmetic I would happily make the bot. Pinging @WOSlinker: since they created the module and can probably answer the cosmetic edit question. --Trialpears (talk) 23:03, 12 August 2019 (UTC)
- I created the module and added the tracking category, but the category was added at the request of User:Zackmann08; see Module talk:InfoboxImage/Archive 1#Pages_using_deprecated_image_syntax -- WOSlinker (talk) 11:57, 13 August 2019 (UTC)
- I had a brief look at this. There is a problem with
{{Infobox election}}
which uses two or more side-by-side images. They are generally set with a size of 160x160 or 150x150. This will give the images the same height (as they are portraits) and the two columns will potentially be different widths. This works reasonably well, though identical aspect ratios would be better. Unfortunately, using the recommended syntax we have the upright scaling factor, which only scales width. It's trivial to calculate this factor if constant width was what we required, e.g. 160/220 = 8/11 = 0.727272... But the factor we actually want will require recovering the aspect ratios from the file pages. Still not hard, though not within the scope of AWB-like tools. Unfortunately this is not very user friendly: if someone changes an image, or uses our page as a cut-and-paste basis for something new, they will need to know how to calculate the necessary number. All the best: Rich Farmbrough, 02:00, 3 October 2019 (UTC).
- Alright, this has sat for a while with no further discussion, so I'm going to say that this is Not a good task for a bot. and will archive the thread soon. --AntiCompositeNumber (talk) 00:16, 25 October 2019 (UTC)
Replacement for User:UTRSBot
UTRSBot has been down for some time, and its maintainer is inactive. A discussion at WP:BOTN seemed to indicate the code was hosted at github and it maybe wouldn't be that hard to replace it. I think this is very important as the bot provided a level of transparency for the UTRS process that is now entirely absent. Beeblebrox (talk) 21:27, 4 October 2019 (UTC)
- Beeblebrox, I've got some ideas on this - I'm hoping to start work on it tomorrow. SQLQuery me! 02:40, 5 October 2019 (UTC)
- Awesome! Beeblebrox (talk) 17:07, 5 October 2019 (UTC)
- TParis responded, btw. --Izno (talk) 17:29, 5 October 2019 (UTC)
Replace break tags with templates to increase accessibility
Break tags should be avoided when creating lists (Wikipedia:Manual_of_Style/Accessibility#Vertical_lists). There are many alternatives; I personally use Template:Unbulleted list.
I want to make the first request simple by limiting it to unbulleted lists in infoboxes.
Example edits: One, Two, Three
Specifications
- When an infobox field contains two or more unbulleted items separated with <br> or <br />
- Perform an edit similar to the example edits listed.
- List must contain less than 30 items (Wikipedia:Manual_of_Style/Lists#Unbulleted_lists)
- Standard template limitations (replacing | with {{!}}, for example, per Wikipedia:Manual_of_Style/Accessibility#Unbulleted_vertical_lists)
I can provide any more information as necessary. Let me know if I am missing something glaring, but it seems like a good bot task. Kees08 (Talk) 05:13, 23 October 2019 (UTC)
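For concreteness, a minimal sketch of the mechanical part of the requested conversion, under stated assumptions: the field value has already been isolated from the infobox, items contain no nested templates, and items with bare pipes would still need {{!}} escaping as the spec notes. It makes no attempt to solve the intent problem raised in the replies below.

```python
import re

BR = re.compile(r"\s*<br\s*/?>\s*", re.IGNORECASE)

def to_ubl(value):
    """'A<br>B<br />C' -> '{{Unbulleted list|A|B|C}}'."""
    items = [i.strip() for i in BR.split(value) if i.strip()]
    if not 2 <= len(items) < 30:   # per the spec: two or more items, under 30
        return value               # anything else is left untouched
    return "{{Unbulleted list|" + "|".join(items) + "}}"

print(to_ubl("London<br>Paris<br />Rome"))
# {{Unbulleted list|London|Paris|Rome}}
```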
- In principle this is good for accessibility, as noted. However, the bot would need to be able to distinguish the cases where a list is definitely intended from those where the
<br />
is there merely as an aesthetic convenience, perhaps to reduce the width of the infobox - these should not be marked up as lists. --Redrose64 🌹 (talk) 08:44, 23 October 2019 (UTC)
- That's a good point, and I do not know a way around it. Might be better as an AWB task. Kees08 (Talk) 15:47, 24 October 2019 (UTC)
- The latter should be replaced by a simple space. ―cobaltcigs 18:58, 24 October 2019 (UTC)
- How common is the second case? There are a very large number of infoboxes; it is probably beyond semi-manual AWB. -- GreenC 23:30, 26 October 2019 (UTC)
- It usually uses the <small> HTML tags as well, so that would help delineate the cases. I personally do not have a good estimate. Kees08 (Talk) 23:35, 26 October 2019 (UTC)
- So I have a script that does edits like this (one of but several things happening here). It doesn't actually check for an infobox template name, but it does require each list to be preceded by
| foo =
(wherefoo
is actually a regex representing an ever-changing whitelist of parameter names). It uses the short form{{ubl|Foo|Bar|Baz}}
if (a) the combined string length of the list is less than a certain number intended to estimate what's short enough to fit on a single line of the edit box for the average user (whose screen I've estimated to be about 20% wider than that of my laptop), and (b) no list item contains any refs or additional templates. Otherwise it defaults to using{{plainlist|\n* Foo\n* Bar\n* Baz\n}}
. It's probably not suitable for fully automatic use. ―cobaltcigs 21:44, 24 October 2019 (UTC)
- So how should | orbit_reference = [[Barycentric coordinates (astronomy)|Barycentric]]<br/><small>(Earth-Moon system)</small> look (seen in Luna 4)? Is that an appropriate use of the breaking space? Kees08 (Talk) 17:11, 26 October 2019 (UTC)
- That example is not a list: it is a term followed by a clarifying parenthesis, it therefore must not be marked up as a list. Whether a space or a
<br />
tag (which is not a breaking space, it is a line break) is used does not affect the fact that no list is intended. --Redrose64 🌹 (talk) 22:16, 26 October 2019 (UTC)
- Not a good task for a bot. mostly on account of the CONTEXT issues. Primefac (talk) 23:34, 26 October 2019 (UTC)
Room 101 update links
Can someone update the incoming links to Room 101 (British TV series) from Room 101 (TV series) (edit | talk | history | protect | delete | links | watch | logs | views)? The currently ambiguous "(TV series)" redirect needs to be retargeted to the disambiguation page Room 101 (disambiguation), where multiple TV shows are listed. -- 67.70.33.184 (talk) 09:32, 29 October 2019 (UTC)
- Done and redirect retargeted. Certes (talk) 11:20, 29 October 2019 (UTC)
- Except, of course, for these. --Redrose64 🌹 (talk) 19:01, 29 October 2019 (UTC)
- Thanks -- 67.70.33.184 (talk) 07:05, 30 October 2019 (UTC)
Request for bot/script to remove autocat= parameter from Template:Certification Table Entry
There are nearly 6,000 articles in Category:Pages using certification Table Entry with unknown parameters (0) that are caused by the presence of the |autocat=
parameter. This parameter was used to assign categories automatically, but I removed the relevant code in 2016 after the categories were deleted. See this discussion for links to information about the deleted categories.
This request is for a bot to remove |autocat=
and its values (all values are invalid, since the parameter is no longer used) from all articles in the above tracking category. It should be doable by someone reasonably skilled with regexes, AWB, or both. – Jonesey95 (talk) 18:33, 29 October 2019 (UTC)
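A hedged sketch of the removal being requested; the regex assumes the parameter value is a simple token (which matches how |autocat= was used) and that nested templates never appear inside it:

```python
import re

# Matches |autocat= and its value up to the next pipe or closing braces.
AUTOCAT = re.compile(r"\|\s*autocat\s*=\s*[^|}]*")

def strip_autocat(wikitext):
    return AUTOCAT.sub("", wikitext)

print(strip_autocat("{{Certification Table Entry|region=Canada|autocat=yes|title=X}}"))
# {{Certification Table Entry|region=Canada|title=X}}
```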
- Jonesey95, this can be handled by PrimeBOT operated by Primefac. --Trialpears (talk) 18:53, 29 October 2019 (UTC)
- Sure, I can get to this in the next few days. Jonesey95, do you know if there are any other "major" parameter(s) that might be suitable for removal? I'd hate to swing through 6k entries removing a single parameter only to find out I should have been removing a second or third along the way to actually drop the cat numbers. Primefac (talk) 01:02, 30 October 2019 (UTC)
- Thanks for asking. Please change
|ceryear=
to|certyear=
as well, and change|song=
to|title=
. Overall, there are only 191 of 6,046 pages that are listed outside of the "A" (for autocat) grouping, and by the way they are alphabetized, it looks like there will be about 150 left after the bot run. Those remaining articles are scattered with various typos and will be easier to fix by hand. – Jonesey95 (talk) 02:54, 30 October 2019 (UTC)- I don't doubt that they're all containing
|autocat=
, but the downside of alphabetical category sorting is that if it triggers for "autocat" any other bad params will be somewhat hidden until autocat is removed. I'll add autocat to the param check temporarily just to get an idea of how many pages will be left after it's removed. Primefac (talk) 12:10, 30 October 2019 (UTC)
- For example, I went through the first half-dozen in the As, and they all list
|access-date=
as an issue. Primefac (talk) 12:13, 30 October 2019 (UTC)
Params to remove/change (see the sketch after this list)
- access-date → accessdate
- album → title
- aritst → artist
- autocat
- certyesr → certyear
- certyer → certyear
- ceryear → certyear
- publisher
- song → title
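A hedged sketch of that mapping as code. Two assumptions are worth flagging: the text passed in is a single {{Certification Table Entry}} transclusion rather than a whole article (|publisher= is perfectly valid in citation templates and must not be stripped there), and values are simple tokens as in the sketch above.

```python
import re

RENAME = {
    "access-date": "accessdate", "album": "title", "aritst": "artist",
    "certyesr": "certyear", "certyer": "certyear", "ceryear": "certyear",
    "song": "title",
}
REMOVE = {"autocat", "publisher"}

def fix_params(entry):
    # `entry` must be one template transclusion, NOT a whole article.
    for old, new in RENAME.items():
        entry = re.sub(r"\|\s*%s\s*=" % re.escape(old), "|%s=" % new, entry)
    for param in REMOVE:
        entry = re.sub(r"\|\s*%s\s*=\s*[^|}]*" % re.escape(param), "", entry)
    return entry
```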
- Fair enough. I'm used to removing these parameters manually, so I just fix everything I find. I'm familiar with the limitation you describe. – Jonesey95 (talk) 12:53, 30 October 2019 (UTC)
- Apologies, I did not mean to imply you did not know how the param check works, I just was clarifying that it might not just be autocat. That being said, removing it from the check has already dropped the numbers down to below 1500, so even if it were just that I'd say it's worth a run. I'll give it another hour or two and see if I can pull up some more repetitive params. Primefac (talk) 13:04, 30 October 2019 (UTC)
- No problem. I found
|aritst=
→|artist=
and|album=
→|title=
. Those should go on the list. Also|certyesr=
and|certyer=
can go to|certyear=
, and|publisher=
can be removed entirely (I have found no evidence that it ever existed in the template or the documentation). – Jonesey95 (talk) 14:09, 30 October 2019 (UTC)
Removing nounspecified= from Template:Certification Table Bottom
Pinging Muhandes for feedback on whether this bot task should also remove |nounspecified=
and its values from {{Certification Table Bottom}}. I haven't looked hard for a discussion about why this parameter was removed, but it looks like it has been gone from the template for three years, and there may be considerable overlap between the above edits and edits that remove |nounspecified=
from articles in Category:Pages using certification Table Bottom with unknown parameters (0). – Jonesey95 (talk) 16:37, 30 October 2019 (UTC)
- Thanks for pinging me. Yes,
|nounspecified=
should be removed from {{Certification Table Bottom}} usage; it is obsolete. I am working on automating the other parameters of the bottom template, but this is an issue for another request. --Muhandes (talk) 16:42, 30 October 2019 (UTC)
- Well, if {{Certification Table Entry}} and {{Certification Table Bottom}} are regularly found on the same page, I might as well hit both of them at the same time to avoid editing a page twice. As with CTE, are there other params outside of
|nounspecified=
that could use to be removed (i.e. with >20 uses)? Primefac (talk) 17:13, 30 October 2019 (UTC)
- They are always found on the same page. I cleaned the category manually before so I can't think of any other obsolete parameter. --Muhandes (talk) 22:06, 30 October 2019 (UTC)
All of the above are Done; first cat is under 400 pages (mostly misspellings of the various params) and the second is down to 12 pages. Primefac (talk) 14:03, 3 November 2019 (UTC)
- And my ping failed, so @Muhandes and Jonesey95: here you go. Primefac (talk) 14:04, 3 November 2019 (UTC)
- Thanks, Primefac. I am working on the remaining errors. WP editors sure are creative. – Jonesey95 (talk) 14:25, 3 November 2019 (UTC)
- Primefac, thanks. --Muhandes (talk) 19:03, 4 November 2019 (UTC)
WikiProject Organized Labour
The Organized Labour Project was created back in 2006. A talk page banner
{{LabourProject}}
was created at the same time; subsequently, in 2010, the banner {{WikiProject Organized Labour}}
was created with LabourProject redirecting to Organised Labour. At present {{LabourProject}}
appears on 2,978 of the 7,191 pages tagged in the Organized Labour Project. Of those 2,978, something over 2,000 are articles. I've been manually changing some as I come across them, but it seems that a bot could be deployed to go through the talk pages where {{LabourProject}}
appears and replace it with {{WikiProject Organized Labour}}
while not changing any of the importance or class assessments ...yes? (I'm assuming problems like this have come up before, but I've had no luck, and little experience, in searching on this kind of problem). Thanks. --Goldsztajn (talk) 23:04, 23 November 2019 (UTC)
Doing a little more searching... perhaps this can be done in AWB? Advice appreciated. --Goldsztajn (talk) 23:15, 23 November 2019 (UTC)
- Withdrawing the request, will request AWB tools. --Goldsztajn (talk) 23:31, 23 November 2019 (UTC)
- @Goldsztajn: There is no need, because one is a redirect to the other, so they are equivalent - WP:NOTBROKEN applies, and if you use AWB, then beware of WP:AWBRULES item 4. --Redrose64 🌹 (talk) 12:55, 24 November 2019 (UTC)
- @Redrose64: Hi and thanks...I was aware of WP:NOTBROKEN, I just had the (mistaken?) impression #6 of WP:BRINT was applicable here. The comment under template redirects on reasons not to use a redirect possibly seems relevant, too, in this case. --Goldsztajn (talk) 13:16, 24 November 2019 (UTC)
- The BRINT one is mainly about template subpages (such as doc pages) transcluded to other templates; this will mainly be to their parent templates, but may be to other templates if two or more templates share a doc page. Your last link is a dab page. --Redrose64 🌹 (talk) 19:26, 24 November 2019 (UTC)
A bot to add the needed comma after month and date in certain date formats
In dates written in the "Month Day, Year" style, a comma should almost always follow the year if the sentence does not terminate right after it. I have made comma-specific edits like this for years, but the problem persists. Making these comma edits by hand takes up a lot of time, and it could be something a bot does in a second. Paper Luigi T • C 03:30, 9 November 2019 (UTC)
- I think this would fall afoul of CONTEXTBOT, since "almost always" generally means "the only instances will be the exceptions" (according to cynicism and Sod's Law). Primefac (talk) 12:19, 9 November 2019 (UTC)
- I used the phrase "almost always" because I don't know if exceptions to the rule exist. Are there any exceptions? Paper Luigi T • C 03:17, 10 November 2019 (UTC)
- Sure. Dates listed in tables. Dates in bulleted lists. Dates within template parameters. Ambiguous statements like "On July 4, 2008 people attended the baseball game." Many more. – Jonesey95 (talk) 15:22, 10 November 2019 (UTC)
- Thank you! I think the way to code this for all but the last exception you listed would be to exclude instances when either "{" or "|" is the first non-space or non-line-break character after the year. Ambiguous statements should be rewritten for clarity. Paper Luigi T • C 03:29, 11 November 2019 (UTC)
- Bot coding has to be clear and unambiguous. I quickly and easily made up a few examples off the top of my head that would confuse your proposed bot, and there are no doubt many more. Please read WP:CONTEXTBOT. Since fixing these errors will require human supervision, you can create lists of potential articles to fix on your own. Using "insource" searches will probably be effective. – Jonesey95 (talk) 07:12, 11 November 2019 (UTC)
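In the spirit of Jonesey95's "insource" suggestion, a hedged sketch of a candidate finder (deliberately not an auto-fixer, per CONTEXTBOT): it only flags MDY dates whose year is followed directly by a lowercase word, leaving the judgment to a human.

```python
import re

MONTHS = ("January|February|March|April|May|June|July|August|"
          "September|October|November|December")
# A year followed directly by a lowercase word is a likely missing comma.
MDY = re.compile(r"\b(?:%s) \d{1,2}, \d{4}(?= [a-z])" % MONTHS)

text = "On July 4, 2008 people attended the baseball game."
for m in MDY.finditer(text):
    print("possible missing comma after:", m.group(0))
# possible missing comma after: July 4, 2008
```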
A bot to remove redundant infobox "title" params
I don't know if this has been suggested before, but I'm suggesting it anyways. I'd like to see a bot that deletes the "title" param (and its accompanying value) from the infoboxes of articles in which the "title" is the same as the article's name. Because of the way infoboxes work, the article title is the default value of this field if it is not already specified. There's no reason for redundancy here, so I'm suggesting a bot that could solve this problem. Paper Luigi T • C 03:26, 9 November 2019 (UTC)
- There's no real reason to remove the parameter, and in many cases there are reasons for the redundancy (e.g. if the article is moved or merged). Headbomb {t · c · p · b} 05:02, 9 November 2019 (UTC)
- Agreed with Headbomb. This is a solution looking for a problem. There are often times when the
|title=
param is used in the infobox to trigger other effects or links. Unless there's a good/practical/technical reason for removing the title parameter, I think this is just busy-work. Primefac (talk) 12:21, 9 November 2019 (UTC)
- For the uninitiated, could you explain those times when that param triggers other effects/links? Paper Luigi T • C 03:16, 10 November 2019 (UTC)
- Not a good task for a bot. as currently proposed.
|title=
is often used to mean something other than the page name. Common examples include {{Infobox person}} (where|title=
is used in 6,400 articles), {{Infobox officeholder}} (where|title=
is used in 3,500 articles), and many more. Here is a search for title= in infoboxes, which currently yields 200 infobox templates. The|title=
parameter is equivalent to the page name in very few of them. If there is a specific infobox for which the proposal makes sense, please specify it and explain your reasoning. – Jonesey95 (talk) 05:04, 10 November 2019 (UTC)- Just to answer your question, Paper Luigi, {{Infobox time zone}} uses the name/title to directly affect switch statements and templates like {{time}}. Primefac (talk) 14:25, 10 November 2019 (UTC)
- Thank you! Paper Luigi T • C 03:24, 11 November 2019 (UTC)
- I guess my suggestion was a little too broad. Specifically, pages containing Template:Infobox video game would make good targets for this proposal. My reasoning is that, when
|title=
is left blank, the page automatically displays the infobox with its value filled in by PAGENAME. When the value for|title=
is character-for-character the same as PAGENAME, it becomes redundant. Template:Infobox musical artist, Template:Infobox television, and Template:Infobox film, to name a few, follow the same logic. Paper Luigi T • C 03:24, 11 November 2019 (UTC)- I don't know exactly how {{Infobox video game}} works, but I don't think it works exactly as you think it does. Take Super Mario RPG, for example. It has a
|title=
value that is different from the page name, but if you remove|title=
from the infobox, the long title still appears in the infobox, not matching the shorter article title. I believe that Wikidata is involved. If you remove the|title=
value but leave the parameter itself in place and empty, the infobox displays no title. I believe that is a bug, but you could have tested it yourself before making the above claims.
- I don't know exactly how {{Infobox video game}} works, but I don't think it works exactly as you think it does. Take Super Mario RPG, for example. It has a
-
- {{Infobox musical artist}} does not appear to use
|title=
, and|name=
in that template does not work as you describe. Rather than continue this game of Whac-A-Mole, I will leave further research as an exercise for the reader. – Jonesey95 (talk) 07:05, 11 November 2019 (UTC)- Thank you for pointing that out because I wasn't aware of situations like Super Mario RPG's in the past. It didn't occur to me to think of the connection to Wikidata, which I'm not very familiar with. If this suggestion were to go forward, even if just for the video game infobox template, I don't see the harm in removing
|title=
on Super Mario RPG or similar pages with corresponding Wikidata listings. As you said, the removal of|title=
would still show the long title in the infobox. In fact, removing|title=
even produces the Template:nowrap for the subtitle that corresponds to the current value for|title=
on that article. Removing|title=
from Super Mario RPG changes nothing as far as I can see, which means it's a redundancy. Paper Luigi T • C 14:41, 13 November 2019 (UTC)
- Thank you for pointing that out because I wasn't aware of situations like Super Mario RPG's in the past. It didn't occur to me to think of the connection to Wikidata, which I'm not very familiar with. If this suggestion were to go forward, even if just for the video game infobox template, I don't see the harm in removing
- {{Infobox musical artist}} does not appear to use
Remove Template:Expert needed when placed without reason
- Needs wider discussion.
From {{Expert needed}} documentation: Add |talk=, |reason=, or both. Uses of this template with neither may be removed without further consideration.
These maintenance tags appear to have been fairly widely bombed onto articles without adequate explanation. Would be fine if it only attended to tags older than 3 months.
Also remove the multiple issues wrapper template if only one maintenance template remains. –xenotalk 18:03, 30 September 2019 (UTC)
- Where is the consensus/discussion about mass removal? Headbomb {t · c · p · b} 18:37, 30 September 2019 (UTC)
- Could you suggest a good venue for that? WT:Maintenance maybe? –xenotalk 18:52, 30 September 2019 (UTC)
- Anywhere that's well advertised. An RFC at WT:Maintenance would work. Cross-posted to WP:VPR and Template talk:Expert needed if it's there. But an RFC directly at Template talk:Expert needed with notices at the other places seem more natural. Headbomb {t · c · p · b} 19:03, 30 September 2019 (UTC)
- Yes, that works. Will be back, thank you for the suggestion. –xenotalk 19:29, 30 September 2019 (UTC)
- Anywhere that's well advertised. An RFC at WT:Maintenance would work. Cross-posted to WP:VPR and Template talk:Expert needed if it's there. But an RFC directly at Template talk:Expert needed with notices at the other places seem more natural. Headbomb {t · c · p · b} 19:03, 30 September 2019 (UTC)
- Could you suggest a good venue for that? WT:Maintenance maybe? –xenotalk 18:52, 30 September 2019 (UTC)
User:xeno: there are fewer than 5,000 instances of {{Expert needed}}. I ran a quick script and the majority have no |reason=
or |talk=
. And those that do are mostly of little value, for example
- "Article needs help"
- "horribly disjointed"
- "Expert needed on early 20th century Russian theater"
- "Expert on science needed"
- "Needs expansion and more details"
- "Ambiguous and rambling"
- "There needs to be more information about this person"
I don't think removals are a good idea. It would wipe out most of them, and those that remain are mostly like the above of little value. Whatever the docs say, in practice the reason/talk field is optional and casual. -- GreenC 01:55, 22 October 2019 (UTC)
- @Xeno: again in case the split-line post doesn't trigger a notification. -- GreenC 02:01, 22 October 2019 (UTC)
- GreenC: Thanks for the stats. In light of this, I would argue they’re almost all of little value then but perhaps this should be taken up at TfD. Are you able to provide any insight into whether anyone is actually addressing these tags? What is the average age of the tag, for example? –xenotalk
Using a sample of 3500 cases:
[Table: count of {{expert needed}} transclusions by year; the table data was not preserved in this archive]
This is a little off because this example shows a date of February 2009, when a bot added a date, but the template itself was added in January 2006 - it took 3 years for a bot to find and add a date. The template was created in 2006 so the missing entries for 2006-2007 are probably dated in 2008 and 2009 when the date bots ran. Most of them have dates right now so the bots seem to be keeping up more recently (80 out of 3500 are missing dates). There is no [easy] way to count how many were resolved/removed, it would require downloading full dumps every month going back to 2006 and grepping through them, a major project and resources. @Xeno: -- GreenC 16:18, 22 October 2019 (UTC)
- The template did not support the date parameter before February 2009. All the best: Rich Farmbrough, 22:01, 25 October 2019 (UTC).
- That explains it, and solves the mystery of why there are entries for 2007/2008: there was a different template, merged into this one, that did support the date parameter.[9] -- GreenC 23:13, 25 October 2019 (UTC)
- There were also a number of templates where I was proactively adding a date to instances as I came across them, before it was an active parameter. Not sure if this was one. All the best: Rich Farmbrough, 20:22, 19 November 2019 (UTC).
WikiProject NRHP project tracking tables and maps
The WikiProject NRHP has an extensive javascript-based system supporting wp:NRHPPROGRESS, which is a Wikipedia-space work status report with maps. There are related programs run occasionally by User:Magicpiano and/or User:TheCatalyst31 (using related User:NationalRegisterBot), but the main update is one program which those two editors plus myself can run as often as we wish, using our own computer devices. I personally would run it every night, if I could, but the main program run takes a long time to run at least on my own computer devices and during normal editing hours (and then often fails to complete, for me). And then it would further cost me about 10 minutes (if I was well-practiced and trying hard) of focused work to regenerate the four associated maps, whose production is largely but not completely automated, so I don't usually do that. There are numerous editors, however, who would be interested to see regularly (daily, i think) complete reporting, like to be able to see the impacts of their own article creation or improvement efforts reflected in the maps. Would any regular bot editor be willing to set up a more centralized system, with update runs on some server to be scheduled to run sometime late every night? Some info about how a user like me can run the main updating script and generate new graphs is written out at Wikipedia:WikiProject National Register of Historic Places/Progress/Instructions and/or reflected in discussion at wt:NRHPPROGRESS.
Also, right now the updating program is not running for me and for TheCatalyst31, apparently due to some edit tokens change which is affecting a lot of user scripts right now. Per wt:NRHP#Is anyone else having trouble with the scripts lately? and Wikipedia:Interface administrators' noticeboard#editToken --> csrfToken migration. Even if the edit token issue is fixed, there still remains more than 3,650 minutes of editor time to be saved per year, and other advantages, which could be achieved from having a centralized run set up.
The overall system has other functions, too, by the way, supported by further scripts, and also could need to be developed more in some ways now and in the future. It would be great if one or a few regular bot editors would be willing to consider setting up a daily run plus be willing to assist on some refinements. :) --Doncram (talk) 15:41, 20 October 2019 (UTC)
- Not saying yes, but could you break down, functionally what this will do? Hasteur (talk) 17:16, 20 October 2019 (UTC)
- A daily update would run the javascript which I can usually run by just hitting "Update Statistics" button on that page (which displays for me because my vector.js is set up to recognize it, i.e. it has "importScript('User:Magicpiano/NRBot/UpdateNRHPProgress.js');" ). I used to run a version of that located elsewhere; Magicpiano took over operating the script several years ago and has maintained that version. Last time I ran it successfully, it implemented this diff on the page, which updated various numbers. It consults all the separate county- and city-level NRHP list-articles in mainspace to do this, and its result is to update the Wikipedia-space work status page (and not to make any change in mainspace). Also a daily update would generate four new maps to replace the ones at Commons, which display on that page. To do that, I run the javascript which partly generates new map images, which I would run by hitting the other button, "Generate SVG Output", on that page (running 'User:Magicpiano/NRBot/NRHPmap.js'). That script updates Wikipedia:WikiProject National Register of Historic Places/Progress/SVG page, which has four data sections for the four maps. I would create new complete map files by copy-pasting those data sections into copies of the SVG map files, then upload those to commons to replace the files there. E.g. one is File:NRHP Articled Counties.svg, whose edit history reflects hundreds of updates, all done manually. Hopefully a bot could generate complete new files, i.e. concatenate starting part of a file, plus the updated data part, plus a closing part of file. And post to Commons. The process to do this manually is described at Wikipedia:WikiProject National Register of Historic Places/Progress/Instructions#Map update process.
- I have posted this request, but Magicpiano would have to be on board too. Rarely but sometimes the script fails, like if the system of NRHP county list-articles in mainspace has been altered in some significant way, and the script then needs to be amended by them. They should be contacted and be willing to be involved, if it does fail. I would hope centralization and auto-running of the updates would help, would save time for, Magicpiano, who does many other things for NRHP coverage development. Hopefully they don't mind my initiating this request. --Doncram (talk) 18:24, 20 October 2019 (UTC)
- That really doesn't tell me anything useful. How I think the script builds the page: Ex I see the National totals are made up of State totals, and State totals are made up of "subdivision/county" totals. It appears that the subdivision totals come from the linked page/section, and then from there it appears for each line: Illus is generated from the count of Non-empty Image rows, % Illus is based on the Illus count divided by Total count, Art is 1 if there is a non-redlink to the individual item, % Art is Art divided by total. Stubs is 1 if there is at least 1 Stub template on the article. NRIS is counted as 1 if {{NRIS-only}} is on the page. Start+ is 1 if the WP NRHP talk page banner lists class as not redirect, stub, unass, or blank. %Start+ is Start+ divided by Total. Unass is the total of pages that have the WP NRHP talk page banner that have an unaddressed or blank class. Not sure what Untag evaluates as, though I think it's not havving the WP NRHP banner. Net quality appears to be
a formula over those counts [formula not preserved in this archive], rounded to the nearest tenth. If I have that right, it's relatively straightforward to build this. I want to first work at automating the table generation. Once we nail that down, we look at populating the SVG template boxes at Wikipedia:WikiProject National Register of Historic Places/Progress/SVG, and finally we can look at getting a bot provisioned at Commons to extract the data from enWiki and move it over as SVG content to Commons. Hasteur (talk) 19:09, 20 October 2019 (UTC)
- S- StartPlus, St - Stubs, Un - Unassessed, Ua - Untagged, NRIS - "NRIS Only" citing, Ill - Illustrated Hasteur (talk) 19:11, 20 October 2019 (UTC)
- I think your understanding is correct. About Untag, that is indeed about not having the WP NRHP banner. Perhaps the only thing you don't comment about explicitly in the calculations is incorporating the adjustments for duplicates in the totals. Duplicates occur when a NRHP site overlaps into multiple county, city, or state areas. The information about those is reflected in explicit rows, e.g. within the Alabama section there are four rows for Jefferson County: one for city of Birmingham's table, one for rest of Jefferson County, one for duplicates spanning the city-county border, and one for total. In a daily update, the bot also goes to, I guess, Wikipedia:WikiProject National Register of Historic Places/Progress/Duplicates to look up the appropriate values, e.g. for Jefferson County, AL. The duplicates' row information there doesn't change often and never changes by much, so only occasionally is the information updated by a run by User:NationalRegisterBot. (Frankly, in my personal opinion, all usage of "duplicates" rows and all related programming could perhaps be dropped. Then the NRHPPROGRESS report's state and national totals would be slightly over- or under-stated, which IMHO would not be a problem. One could still discern progress being made, and one could zero in wherever of interest. It happens to be the case that detailed treatment of duplicates was built into the system, though.)
- It's great that you see how to do all this, in principle, including having a Commons bot run to produce complete map files. That alone would be a great advance, IMHO.
- Would you be planning to re-program this into something different than javascript, or would you essentially move the javascript scripts over to somewhere else? I am just wondering whether Magicpiano or anyone else familiar with the javascript (not me) would still be able to have a parallel version of the scripts, which they could run independently on occasion, like if they were testing out some change to be made. I am not sure if this would matter to them, but I think they would want to know if this is being changed to a different programming language that they are not familiar with. They could well have a view about switching over or not, depending. --Doncram (talk) 00:52, 21 October 2019 (UTC)
- That really doesn't tell me anything useful. How I think the script builds the page: Ex I see the National totals are made up of State totals, and State totals are made up of "subdivision/county" totals. It appears that the subdivision totals come from the linked page/section, and then from there it appears for each line: Illus is generated from the count of Non-empty Image rows, % Illus is based on the Illus count divided by Total count, Art is 1 if there is a non-redlink to the individual item, % Art is Art divided by total. Stubs is 1 if there is at least 1 Stub template on the article. NRIS is counted as 1 if {{NRIS-only}} is on the page. Start+ is 1 if the WP NRHP talk page banner lists class as not redirect, stub, unass, or blank. %Start+ is Start+ divided by Total. Unass is the total of pages that have the WP NRHP talk page banner that have an unaddressed or blank class. Not sure what Untag evaluates as, though I think it's not havving the WP NRHP banner. Net quality appears to be
Collapsed box: probably unnecessary stuff about how the system works
BEGIN probably unnecessary stuff: I hope the following won't be too much; I think you may probably ignore all of this passage. But I am wondering if it would be necessary or helpful for you to understand all other programming parts of the system. FYI those include: [collapsed content not preserved in this archive]
- Not to put words in the requester's mouth, but what he seems to want is to have the updating of WP:NRHPPROGRESS (both data and maps) to be more automated than it is. It would be an error to describe the collection of scripts used by editors in the NRHP project as a "system" -- they are just a collection of scripts, most of which are unrelated to each other.
- The script that updates the data at WP:NRHPPROGRESS is User:Magicpiano/NRBot/UpdateNRHPProgress.js. I don't know why it takes Doncram so long to run it; my runs rarely take more than five minutes on two-year-old hardware. The bulk of this script was written by Dudemanfellabra, who withdrew from Wikipedia a few years ago. I have maintained it (with minimal changes to its data-collection functions) since then. Some of those changes have been mandated by changes in the wikimedia ecosystem, such as the recent csrfToken business, while others have been focused on improved error handling and logging, which the code I inherited did poorly.
- Related to this script is the SVG map generator, User:Magicpiano/NRBot/NRHPmap.js. This script parses the progress page and produces fragments of SVG, which need to be edited on the editor's computer to produce the actual image files. This process is documented at Wikipedia:WikiProject National Register of Historic Places/Progress/Instructions, and takes me about 10-15 minutes to do all four maps. This process could be more fully automated, since it basically involves pasting the created fragments into a boilerplate surround, and then uploading the resulting SVG to Commons.
- The duplicates data is produced by NationalRegisterBot, which is run irregularly by me under User:NationalRegisterBot. (It does not need to run frequently, since the two things it does, duplicate gathering and NRIS-only tagging, don't actually change a great deal in any given month.)
- The principal issue militating against fully automating some of these processes is a somewhat fragile dependence on the state of the underlying pages. The progress update script parses NRHP list pages, NRHP article pages, and NRHP article talk pages in its data gathering process, and changes to them can break the update script. For example, it once broke when some "National Register of Historic Places listings in X" pages were moved (with redirects), because it made fragile assumptions (since corrected I believe) about the relationships between page names and NRHP locations. These sorts of issues lurk in the code base. Some types of changes, notable decisions around splitting long NRHP lists, have to be reflected in the content of WP:NRHPPROGRESS before the script is run, something editors who execute such moves may be unaware of. The script runner is of course unaware of the changes occurred, and it then fails, requiring a diagnose-and-fix process. (This fragility also affects NationalRegisterBot.)
- That said, it may be possible to re-engineer what these two scripts do to be more resilient in the face of those sorts of changes, and in changes editors make to the pages it reads that break its parser (which has also happened on more than one occasion). This work is of a scope I am unwilling to tackle. I am willing to impart what I know about how the existing scripts work to someone willing to take on such a task. Magic♪piano 03:01, 21 October 2019 (UTC)
- I can't, and don't, disagree with any of that. About run-times for me: I ran the NRHPPROGRESS update bot at about 3:00 am U.S. eastern time last night, when it should have been fast, and it reported 4 min 52 sec elapsed time before starting to save the result (at least it finished; thanks for fixing the edit token issue!). It would be nice if the bot recorded the elapsed time in the edit summary it writes, BTW. In recent months at peak times it has taken 20 minutes or more and was clearly stalled before I would kill the process and try again later, and maybe have to try again later still. Sometimes that was frustrating; it seemed I had to keep the cursor in the window and not do anything else, or else it would run even slower. At other times it could take as little as about a minute. If an update bot could be scheduled to run late at night and be pretty much guaranteed to complete, runtime length wouldn't matter at all.
- I get it that some of the scripts are not central to any system, but the duplicates script is central. I understand it records its resulting data table or whatever into hidden comments or section(s) within the page(s) it writes, which the daily update bot goes to consult. The overall system would break down if this javascript update bot did not run occasionally, so if the others are moved over to Python, this should too, probably. And/or, it would seem good to me if the data table it generates could be written out visibly, and have the update bot draw from the visible data. That might allow editors to make manual edits (relatively few ever required) there, so running the duplicates script would be more optional. At least (all?) results of the main update script and the SVG script are visible. --Doncram (talk) 20:58, 21 October 2019 (UTC)
- @Doncram: "How do we eat a hippopotamus? One bite at a time." Let's first focus on getting the data table updates automated. Then we can focus on getting the SVG snippets, and then we can focus on the Commons bot to update the images. As to re-engineering, I'm planning on re-implementing this in Python3 with the Pywikibot framework. My goal is to develop the script that does the data table updates by driving down through the data in a data-oriented format. I can envision adding messaging so that if one of the subunit pages fails to parse correctly, the bot reports to the NRHP talk page that the subunit page is malformed and needs review. This will help the project stay on top of the indexes. I'm a python programmer by trade and my goal is to leave enough comments and data behind so that if future changes are needed when I shuffle off my mortal coil (or others want to make improvements) we won't have to unwind the whole page again. By also picking pywikibot, I'll be able to run it on the Wikitech Toolforge compute cluster, which will make it significantly faster to run these numbers instead of streaming data down to your local desktop and then pushing data back to Wikipedia. Hasteur (talk) 03:32, 21 October 2019 (UTC)
- @Magicpiano: See above, but my goal is to reach out behind the scenes from Toolforge so that we don't have as much data transiting the wire. Also, if we fail to parse one of the indexes, we can add a notice (either at a purpose-built landing box like User:HasteurBot/Notices or at the NRHP project talk page) to flag down attention. Also, if one of the index pages loses some significant percentage of its items, the bot could send an additional notice that the membership appears to have gone down quite a bit and the page might have been split unsuccessfully. Hasteur (talk) 03:38, 21 October 2019 (UTC)
- Those types of messages sound good; perhaps they could best be sent to the talk page of the NRHPPROGRESS report, rather than the main talk page of the WikiProject NRHP page. Hasteur, about going forward this way, I appreciate your willingness, and I guess I hope that you will, though not if Magicpiano has reservations. I worry somewhat about future programmers' willingness to modify/develop the system in Python, but the same worry applies for Javascript. I have little expectation that I could ever contribute programming tweaks in the future, but I vaguely think I'd have a tad more chance in Python (more in the freeware world, right?) than Javascript. User:TheCatalyst31, could you comment? --Doncram (talk) 20:58, 21 October 2019 (UTC)
- I have experience with both Python and JavaScript, so that's not an issue for me, though I'm not sure how much time I'd have to help anyway; in practice, I haven't actually done much work with the bot. (I'm also a programmer by trade, but in my case that usually means I want to do things other than programming outside of work.) TheCatalyst31 Reaction•Creation 00:08, 22 October 2019 (UTC)
- If Hasteur's plan is to basically rewrite the functionality in Python, I have no particular objection. My concerns lie in the existing code base, so a rewrite in the context of a resident bot is likely to be an improvement with respect to its fragility issues. Magic♪piano 12:13, 24 October 2019 (UTC)
- Those types of messages sound good, perhaps could best be sent to the Talk page of the NRHPPROGRESS report, rather than the main Talk page of WikiProject NRHP page. Hasteur, about going forward this way, I appreciate your willingness, and I guess I hope that you will, though not if Magicpiano has reservations. I worry somewhat about future programmers' willingness in the future to modify/develop the system in Python, but same worry applies for Javascript. I have little expectation that I could ever contribute in programming tweaks in the future, but I vaguely think I'd have a tad more chance in Python (more in the freeware world, right?) than Javascript. User:TheCatalyst31, could you comment? --Doncram (talk) 20:58, 21 October 2019 (UTC)
Updating municipality websites and mayors of Colombia
I have three requests which might ideally be handled by the same bot, if that exists:
- Yesterday, general elections happened in Colombia, where new mayors were chosen. The Spanish-language wiki has probably already updated them in the municipality pages, or that is being done (manually?). Over the last few years I have added or updated a good 200 of them manually, but far from all 1200+...
- The same for the municipality websites. The Colombian authorities moved them all to new URLs without redirecting the old ones, and the links in the municipality pages have been updated per my former bot request. But I used those same websites in many articles, mostly (but not only) about the Muisca Confederation, and those are still all dead links.
- I have been adding the basic information from those official websites, including citing them as such in the stubs I worked on, but it is an impossible task to do that for all municipalities. Es.wiki and the official websites have a lot of basic information (elevation, area, population+year (at es.wiki), year of foundation, founder, maps, coat of arms and flag update checks, adding photos if they are on Commons/other language wikis, etc.) that a bot should be able to add when properly handled (there is a bit more going on in these towns than in the many added Iranian villages with 4 goats and 1 owner...).
It would be ideal if one bot could combine these efforts, to not have to do this task manually like I did for instance with Susacón as opposed to the untouched La Victoria, Boyacá. Ping me for discussion because I don't watch my 'watch'list often. Tisquesusa (talk) 14:52, 28 October 2019 (UTC)
- Tisquesusa: URLs can be a special case due to complications with unwinding and/or adding archive URLs and/or {{dead link}} tags, and other things. Recommend URL requests at WP:URLREQ, assuming there is a way to determine what the new URL will be. -- GreenC 17:44, 28 October 2019 (UTC)
- @GreenC: The bot did the URL job correctly, but restricted to the municipality infobox. For instance Zipacón has the correct new address under "Official address", but the same address (the information site) used as reference does not (nor does it have a dead link tag). So that code exists already; it is just a matter of applying it to all articles and everywhere within them, not only the municipality infobox link. That is the easiest task of the three requests. Tisquesusa (talk) 18:43, 28 October 2019 (UTC)
- An extra bit for the "<mun address>/informacion_general.shtml" needs to be added. The new address is "<mun address>/tema/municipio".
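As a rough illustration, the suffix swap described above is a simple string rewrite; a minimal sketch follows (the example domain is invented, and a real run would still go through WP:URLREQ as GreenC suggests above, to handle archive URLs correctly).

<syntaxhighlight lang="python">
import re

# Swap the old info-page suffix for the new "/tema/municipio" path.
OLD_SUFFIX = re.compile(r"/informacion_general\.shtml")

def update_municipality_url(wikitext):
    """Rewrite old municipality URLs wherever they appear in the wikitext."""
    return OLD_SUFFIX.sub("/tema/municipio", wikitext)

print(update_municipality_url(
    "http://www.example-municipio.gov.co/informacion_general.shtml"))
# -> http://www.example-municipio.gov.co/tema/municipio
</syntaxhighlight>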
- We have archive bots that convert dead links to archived versions; they run continually, checking for dead links and converting them to archive URLs automatically. Thus one can never be sure whether a URL has been archived or not without inspecting it, and for 1200+ articles no one knows. BTW I checked Zipacón and the last time it was edited was 2017, and I don't see a URL conversion. -- GreenC 22:19, 28 October 2019 (UTC)
Bot to fix >2900 broken URLs
Recently, all >2900 transclusions of {{WTA}} have been broken due to breaking changes made by the Women's Tennis Association to their URL format. The new format requires the name of the player to be added at the end of the URL. The best way I believe for this to be done is to add a parameter to the template that contains the name in the format the WTA requires for the URL to work. The bot should replace {{WTA}} with {{WTA|url-name=firstname-lastname}}, leaving any other parameters alone. I've created a list of names to add to the new parameter that should fix most of the broken URLs instantly (the ones that remain broken due to the WTA's inconsistent handling of diacritics can be fixed manually). Iffy★Chat -- 20:25, 13 November 2019 (UTC)
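A minimal sketch of the requested edit, for illustration only; the one-entry name mapping below is a stand-in for the prepared list, and a real run would load the full list and skip pages with no mapping.

<syntaxhighlight lang="python">
import re

URL_NAMES = {"Example Player": "example-player"}  # stand-in for the real list

def add_url_name(wikitext, player):
    """Insert |url-name= into {{WTA}} transclusions, keeping other params."""
    name = URL_NAMES.get(player)
    if name is None:
        return wikitext  # no mapping: leave for the manual diacritics fixes
    return re.sub(r"\{\{\s*WTA\s*([|}])",
                  lambda m: "{{WTA|url-name=" + name + m.group(1),
                  wikitext)

print(add_url_name("{{WTA|230234}}", "Example Player"))
# -> {{WTA|url-name=example-player|230234}}
</syntaxhighlight>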
- It seems more efficient to update Wikidata so all interested wikis can pull it from there. There is discussion at wikidata:Property talk:P597#URL is not working. PrimeHunter (talk) 13:00, 16 November 2019 (UTC)
Not done Withdrawing - Wikidata Ids are being updated, and I've already updated the template to use the new Ids. Iffy★Chat -- 13:43, 6 December 2019 (UTC)
Requesting a bot that goes through all REDIRECTs and eliminates all categories not hidden
Right now there is a situation where some editors have gone through a rather large number of articles at Index_of_Babylon_5_articles and turned them into redirects, but left the category tags intact. So I look at Category:Babylon 5 and can't tell which blue links are articles and which are just redirects. I have seen this problem elsewhere at times as well. The purpose of categories is to help find articles, not redirects. The only categories redirects should have are the "Hidden categories" which list which sort of redirects they are. Dream Focus 02:16, 11 December 2019 (UTC)
- Dream Focus, this would definitely be a CONTEXTBOT. Quoting the relevant guideline: "There are some situations where placing a redirect in an article category is acceptable and can be helpful to users browsing through categories." ‑‑Trialpears (talk) 02:22, 11 December 2019 (UTC)
- (edit conflict) Dream Focus, I know it doesn't deal with the actual issue, but redirects are in italics on Category pages. If you've got Anomie's link classifier, they're also in green (but clearly based on your concerns you don't, and that wouldn't help the casual reader). Primefac (talk) 02:22, 11 December 2019 (UTC)
- @Dream Focus: if you want to know what are redirects and what isn't at a glance, check User:Anomie/linkclassifier.js. Headbomb {t · c · p · b} 02:28, 11 December 2019 (UTC)
- My screen is small and my vision not that great, so I never noticed that some were in italics before. Anyway, there is no possible reason anyone would look at a category listing of a notable series just to find redirects. No reason to have them listed there. Perhaps a bot could put all the redirects into a different category, or a sub-category of an existing series, just to keep them out of the way entirely. Dream Focus 02:53, 11 December 2019 (UTC)
- Dream Focus, an AWB run removing redirects from this category may be appropriate, but systematically removing all redirects from all article categories would simply be against consensus. A bot is not the right tool in this case. ‑‑Trialpears (talk) 03:02, 11 December 2019 (UTC)
- Not without consensus to do so. WP:AWBRULES still apply there. Headbomb {t · c · p · b} 03:08, 11 December 2019 (UTC)
- Oppose I usually mention Honey Lantree in such situations, so see Wikipedia talk:Categorizing redirects/Archive 2#categorizing redirects as though they were articles. --Redrose64 🌹 (talk) 08:54, 11 December 2019 (UTC)
Renaming class of article on talk page
A fair number of articles have been turned into redirects over the years, but most people seem to forget to change the classification of the article on its talk page. I've actually seen a couple of former GAs merged into other articles or turned into redirects to sections, etc. But they still have the GA rating despite not currently having any textual content. This problem should be pretty easy to fix, imo, because it's rather easy to tell the difference between a redirect and a regular article programmatically, so a bot with these capabilities wouldn't be too hard to code, I'd imagine. Jerry (talk) 22:01, 31 October 2019 (UTC)
- For Good Articles, Wikipedia:Good articles/mismatches is updated weekly. It gets pretty complicated for GA anyway. -- GreenC 22:23, 31 October 2019 (UTC)
- I'm well aware of that page. That's actually how I've been able to find such articles. Jerry (talk) 22:43, 31 October 2019 (UTC)
- If the talk page is that of a redirect, the |class= should either be removed or set to the null value. It will then autodetect. --Redrose64 🌹 (talk) 09:23, 2 November 2019 (UTC)
- Disagree, I find redirect class to be useful. Jerry (talk) 22:55, 3 November 2019 (UTC)
- Why is autodetection not useful? --Redrose64 🌹 (talk) 11:58, 4 November 2019 (UTC)
- @Redrose64: I've misunderstood. I meant that redirect-class is a useful categorization, not that autodetection was not useful. Jerry (talk) 23:43, 8 November 2019 (UTC)
Adding module dependencies to doc page
Is it possible to have a bot that checks module code in the Module namespace for dependencies in require() and mw.loadData() calls, extracts the modules being used, and adds them to the {{Lua}} template on the module's doc page? Ideally this should be an automatically recurring bot. This will help find uses of a module through a user-friendly method via "What links here". --Gonnym (talk) 15:08, 1 November 2019 (UTC)
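For reference, a rough sketch of the dependency extraction itself. It is regex-based and illustrative only; a robust bot would handle more Lua call styles (e.g. require with no parentheses) and then update the doc page.

<syntaxhighlight lang="python">
import re

# Match require('Module:X') and mw.loadData('Module:X'), either quote style.
DEP_RE = re.compile(
    r"""(?:require|mw\.loadData)\s*\(\s*['"](Module:[^'"]+)['"]\s*\)"""
)

def find_dependencies(lua_source):
    """Return the distinct Module: pages loaded by this module's source."""
    return sorted(set(DEP_RE.findall(lua_source)))

src = """
local yesno = require('Module:Yesno')
local cfg = mw.loadData('Module:Documentation/config')
"""
print(find_dependencies(src))
# -> ['Module:Documentation/config', 'Module:Yesno']
</syntaxhighlight>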
- Gonnym, since the pages will almost always include a call to {{documentation}}, why not just change that template's Lua module and have it automatically add it at the top of the template? Primefac (talk) 12:25, 9 November 2019 (UTC)
- Just to make sure I understand what you meant, modify Module:Documentation to automatically detect the module dependencies and add the {{Lua}} code? --Gonnym (talk) 14:44, 9 November 2019 (UTC)
- I don't see why it is necessary to put {{lua}} boxes on modules at all. * Pppery * it has begun... 17:13, 9 November 2019 (UTC)
- Really? There are plenty of reasons why you'd want to document the modules a module uses on its documentation page. You can't click on links in the module code, so having a link to dependency modules from the documentation makes it easier. Also, as far as I know (and correct me if I'm wrong), "what links here" doesn't "see" these usages, so instead you'd have to use a search query. Could you give me any reason why someone might be against documenting module usage? --Gonnym (talk) 17:17, 9 November 2019 (UTC)
- OK, that does make sense as a valid reason. I've made some changes to Module:Lua banner to fix inaccurate wording in the resulting boxes. Several suggestions, though.
- Don't include super-common modules like Module:Arguments or Module:Yesno, for the same reason that Module:Check for unknown parameters usage isn't documented on every template with unknown parameters.
- If this is done by bot, create a redirect like Template:Module dependencies and use it; the name Template:Lua does not make sense here.
- A similar bot task would be useful for templates that use Lua but lack documentation of it.
- * Pppery * it has begun... 17:33, 9 November 2019 (UTC)
Feedback request - bot to maintain CAT:UAA and similar categories
(copied from bots noticeboard, I was told that this was a more appropriate place for the question)
Hi all, I was wondering if it would be useful to have a bot that goes through CAT:UAA and removes the category entry from users who shouldn't be listed anymore - per the category page, "Accounts should be removed from this category when they have been indefinitely blocked [or] inactive for more than one week". My thinking is that right now the category has several thousand accounts listed, most of which are very stale (from the handful I clicked on, I saw several reports which were over two years old). My general idea/pseudocode for the script is pretty simple:
- For each entry in CAT:UAA:
- If user is blocked or user last activity was more than (some time) ago:
- Remove category from user
I believe AvicBot was supposed to have this on its task list, but since that was an AWB task and Avicennasis hasn't been around for a while, I thought it might be worth turning into a fully automated bot task. (some time), of course, would be configurable - probably 1 year for initial testing, followed by a shorter time (AvicBot did two weeks; I would consider that or one month pretty reasonable - one week is a bit short for my tastes, and I don't see a pressing need to remove these accounts quickly). I'd also consider expanding it to some of the other categories that AvicBot was covering. And just to be clear - I want to write the bot, I'm just looking for feedback as to whether this is worth writing. Thoughts? creffett (talk) 22:33, 11 November 2019 (UTC)
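A rough Pywikibot sketch of the pseudocode above, as a dry run. Assumptions: the category name is the CAT:UAA target, the cutoff is the one-year initial-testing window, and a recent Pywikibot is installed; note that in practice the category usually arrives via a username-warning template, so actual removal would edit that template rather than a bare category link.

<syntaxhighlight lang="python">
from datetime import timedelta

import pywikibot

site = pywikibot.Site("en", "wikipedia")
cat = pywikibot.Category(
    site, "Category:Wikipedia usernames with possible policy issues")  # CAT:UAA
cutoff = site.server_time() - timedelta(days=365)  # 1 year for initial testing

for page in cat.members():
    user = pywikibot.User(site, page.title(with_ns=False))
    last = user.last_edit  # (page, revid, timestamp, comment), or None
    stale = last is None or last[2] < cutoff
    if user.is_blocked() or stale:
        print(f"would remove category from {page.title()}")  # dry run only
</syntaxhighlight>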
discussion about location
- Ugh, sorry, I just went ahead and added the question to Wikipedia:Bot_requests#Feedback_request_-_bot_to_maintain_CAT:UAA_and_similar_categories. Now that I've managed to split the discussion, would you rather I strike that request and keep the discussion here, or comment on the notifications to point them to the bot requests board? creffett (talk) 22:35, 11 November 2019 (UTC)
It doesn't look like a difficult bot, seems useful, and the previous botop is no longer active. Go for it IMO. I agree that when automated, the time window should be considerably longer. Suggest keeping a log of removals in case it's ever needed, noting at the top e.g. "bot log of removals available". -- GreenC 20:50, 13 November 2019 (UTC)
- GreenC, thanks (and good feedback about the logging), I've prototyped the bot already (what can I say, I like coding), will submit a bot request so that I can actually test it. creffpublic a creffett franchise (talk to the boss) 21:56, 13 November 2019 (UTC)
- BRFA filed at Wikipedia:Bots/Requests_for_approval/Creffbot. creffett (talk) 01:52, 14 November 2019 (UTC)
- @GreenC: Won't the bot's contributions page function as a log of removals? * Pppery * it has begun... 01:55, 14 November 2019 (UTC)
- Only if that were all the bot account did. Otherwise you might have to filter on the edit summary, which gets more difficult for casual users. Log files are easier to work with, and on Toolforge they're basically free, so why not. They can be cycled as needed and add only one line of code. -- GreenC 02:20, 14 November 2019 (UTC)
- Also, if it proves useful at CAT:UAA I'll probably file a BRFA to expand it to the other categories that AvicBot used to maintain, so it would be good to build in the per-category logging from the start. creffpublic a creffett franchise (talk to the boss) 16:10, 14 November 2019 (UTC)
JL-Bot for Recognized content for Kerala
Hi, can you help with a bot for finding recognized content for articles related to Kerala? Please configure it; ref: Wikipedia:WikiProject Kerala/Metrics. J.Stalin S Talk 10:13, 9 January 2020 (UTC)
- Stalinsunnykvj, Category:Kerala_articles_by_quality has lists of GA, FA and FL articles related to Kerala. ‑‑Trialpears (talk) 10:15, 9 January 2020 (UTC)
- Trialpears, how do I add something like this? Wikipedia:WikiProject Christianity/Recognized Content. J.Stalin S Talk 10:19, 9 January 2020 (UTC)
- Ah sorry. JLaTondre, could you help? ‑‑Trialpears (talk) 10:25, 9 January 2020 (UTC)
- @Stalinsunnykvj: The template needs to specify a template or category that marks all the project's articles. In this case, the wikiproject doesn't have its own template so it needed to use a category. I updated the configuration and ran the bot against that page. -- JLaTondre (talk) 20:56, 9 January 2020 (UTC)
- @JLaTondre:@Trialpears: Thank you for the help! J.Stalin S Talk 04:04, 10 January 2020 (UTC)
Quick-ish bot query
I'm looking for a very quick usage summary of DOIs across Wikipedia. Basically, things that can be surmised from {{doi}} and the various |doi= of {{cite xxx}}/{{citation}} templates. What I want to know is how often each prefix is used (the 10.XXXX part of 10.XXXX/foobar; XXXX will always be 4 or more pure digits), put in a sorted list.
Any prefix that isn't in use should just be omitted. Things can be uploaded directly at User:Headbomb/DOI usage. Whether it's best to do this from a WP:DUMP, or from some Quarry query, or whatever, I leave up to people who know how to do this.
Thanks in advance. Headbomb {t · c · p · b} 22:30, 24 December 2019 (UTC)
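For illustration, a minimal sketch of the tally over raw page text (from a dump or scrape; the sample input below is invented). It uses the raw-string approach rather than template parsing.

<syntaxhighlight lang="python">
import re
from collections import Counter

# A DOI prefix is "10." plus 4 or more digits, taken up to the first slash.
PREFIX_RE = re.compile(r"\b(10\.\d{4,})/")

def tally_prefixes(texts):
    """Count DOI prefixes across an iterable of page texts."""
    counts = Counter()
    for text in texts:
        counts.update(PREFIX_RE.findall(text))
    return counts

sample = ["... {{doi|10.1016/j.cell.2019.01.001}} ... |doi=10.1063/1.5000000 ..."]
for prefix, n in tally_prefixes(sample).most_common():
    print(f"| {prefix} || {n}")  # wikitable rows for User:Headbomb/DOI usage
</syntaxhighlight>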
- Note that if it's simpler to do a raw string search (for the 10.XXXXX part of 10.XXXXX/foobar), regardless of where it appears, that's also fine. Exactness of results isn't critical. Mainspace-only is fine, but you can add Draft/Template-space too if you want. Headbomb {t · c · p · b} 22:34, 24 December 2019 (UTC)
- Headbomb, Done at quarry:40890 across all namespaces using the externallinks table. I'm not super confident in my SQL skills, but the relative magnitudes should be correct. --AntiCompositeNumber (talk) 17:15, 29 December 2019 (UTC)
- @AntiCompositeNumber:, thanks! I'll take a look. Headbomb {t · c · p · b} 17:19, 29 December 2019 (UTC)
A bot to replace curly ("smart") quotes with straight ones
The MoS is explicit on using straight quotes, but imported text often has some form of curly or back-tick quotes. What about a bot that fixes these? --John Lunney (talk) 14:00, 24 December 2019 (UTC)
- This task would probably fail WP:CONTEXTBOT. Sometimes editors incorrectly use curly quotes in place of the ʻokina, the grave accent mark, or the prime mark, and we should not replace those. We should also not replace curly quotes used deliberately, as in quotation mark. Changing curly to straight quote marks can also change formatting of text by introducing bold or italic formatting where it may not have been intended. – Jonesey95 (talk) 15:41, 24 December 2019 (UTC)
- Dumb question, Jonesey95, and I have zero expertise in the bot world, so please forgive my ignorance, but couldn't a program be set up such that the bot looked at the categories in the article, and if Polynesian-type categories existed, the bot would skip over certain punctuation? Seems doable, although what do I know... Regards, Cyphoidbomb (talk) 05:43, 27 December 2019 (UTC)
- I fix lots of curly quotes, but I always check my edits manually to try to ensure that I am not replacing curly marks that look like curly quotes but are actually something else. In a perfect world, some sort of category-based approach might work, but your proposal would fail on quotation mark and many other articles that would be difficult for a human to predict. Luckily, the vast majority of readers are able to parse curly quotes just fine until we gnomes are able to get to them. – Jonesey95 (talk) 06:18, 27 December 2019 (UTC)
- Dumb question, Jonesey995, and I have zero expertise in the bot world, so please forgive my ignorance, but couldn't a program be set up such that the bot looked at the categories in the article, and if Polynesian-type categories existed, the bot would skip over certain punctuation? Seems doable, although what do I know... Regards, Cyphoidbomb (talk) 05:43, 27 December 2019 (UTC)
- Not a good task for a bot. Jonesey95 has pretty much hit it on the head; too many issues with context, unless you can guarantee that the curly quotes will only be in one type of article or location. Primefac (talk) 16:13, 27 December 2019 (UTC)
Ref Name error with the National Register of Historic Places.
Please edit the "NRHP" parameter for Ref Name. It's supposed to redirect tohttp://nrhp.focus.nps.gov/natreg/docs/All_Data.html; however, it now redirects to "https://npgallery.nps.gov/nrhp/Download?path=/natreg/docs/All_Data.html", due to a (presumably recent) URL change. The big problem here is that Google Chrome interprets this as malicious behavior and gives me a "your connection is not private" error message. Is anyone else getting that? See, for example, the first citation on the Wikipedia article for the city hall of Norwich, Connecticut. HighwayTyper (talk) 19:57, 6 January 2020 (UTC)
- The reason Chrome is unhappy is that the certificate at NPS.gov, which secures the connection, may be invalid (NET::ERR_CERT_COMMON_NAME_INVALID). That particular part must be fixed by the website owner. --Izno (talk) 21:41, 6 January 2020 (UTC)
- This is the wrong page for this bug report. I have created a thread at Template talk:NRISref. – Jonesey95 (talk) 03:47, 7 January 2020 (UTC)
de-orphan bot?
Redundant: Already one of User:JL-Bot's tasks. creffpublic a creffett franchise (talk to the boss) 13:39, 22 January 2020 (UTC)
Is there currently a bot that removes the "orphan" tag from articles once they have inbound links from mainspace? I saw Wikipedia:Bots/Requests for approval/Addbot 18 (and 16), but that bot hasn't run in a long time. If not, I can slap something together, but wanted to make sure there isn't a bot doing the task already.
Pseudocode:
- For each article in Category:All orphaned articles:
  - If article has >= 3 inbound links from mainspace: remove orphan tag
Probably some optimizations to be made (since the category contains 106k articles), but that's the gist of the idea. Would also need to make sure not to count redirects and disambigs as inbound links, and if desired, I could raise the threshold to something greater than what a human would use. creffpublic a creffett franchise (talk to the boss) 20:11, 21 January 2020 (UTC)
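For illustration, a rough Pywikibot sketch of the link check, using the >= 3 threshold from the amended pseudocode; the article title below is a placeholder, and a real run would also remove the tag and iterate the whole category.

<syntaxhighlight lang="python">
import pywikibot

site = pywikibot.Site("en", "wikipedia")

def enough_inlinks(page, threshold=3):
    """True once the page has `threshold` countable mainspace inlinks."""
    count = 0
    # filter_redirects=False yields only non-redirect linking pages
    for ref in page.backlinks(filter_redirects=False, namespaces=[0]):
        if ref.isDisambig():
            continue  # disambiguation pages don't count as inbound links
        count += 1
        if count >= threshold:
            return True
    return False

article = pywikibot.Page(site, "Example article")  # placeholder title
print(enough_inlinks(article))  # True -> orphan tag can come off
</syntaxhighlight>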
- Pretty sure previous consensus is that bots shouldn't remove orphan tags from pages with fewer than 3 links. Here's what the AWB logic is. Headbomb {t · c · p · b} 20:15, 21 January 2020 (UTC)
- Seems reasonable enough, will amend the above pseudocode. creffett (talk) 22:58, 21 January 2020 (UTC)
- It's really not a big issue. See Wikipedia:Database reports/Orphans with incoming links. ‑‑Trialpears (talk) 23:07, 21 January 2020 (UTC)
- That AWB logic ought to be changed - WP:ORPHAN#Criteria indicates that the tag should only be placed on articles with zero incoming links. ♠PMC♠ (talk) 23:44, 21 January 2020 (UTC)
- The issue for automated edits is that the criteria specifically exclude certain types of pages from counting as links. As it would be prohibitive to check each linking page to see if it's a relevant link or not, the consensus was to use a higher number for automated edits and leave it up to users to remove ones below that. -- JLaTondre (talk) 01:23, 22 January 2020 (UTC)
- Hmm. A little annoying for those of us who work in the trenches of CAT:O but I see the logic. ♠PMC♠ (talk) 15:03, 22 January 2020 (UTC)
- JL-Bot is approved for this. That task runs every couple of weeks. -- JLaTondre (talk) 01:23, 22 January 2020 (UTC)
- JLaTondre, cool, I hadn't met JL-bot yet. Nothing to do here, then. creffpublic a creffett franchise (talk to the boss) 13:39, 22 January 2020 (UTC)
Bot to find biographies with error in date of birth/death
Hello, while working on different tasks I often come across numerous mistakes relating to the date of birth or death in biographies. I thought it could be a useful task for a bot to either add a hidden category to the articles which contain errors, or generate/update various lists of these articles. For example, here are some possible mistakes which a bot could check for both date of birth and death:
- Different dates used in infobox and lede (e.g. lede says born 23 January 1980, infobox says 23 March 1980)
- Inconsistent date formats in the infobox and lede (e.g. infobox uses DMY, while the lede uses MDY)
- Date exists in the infobox, but not the lede
- Date exists in the lede, and an infobox is present at the top of the article, but the date is not included
- Poorly formatted dates (e.g. "01 March 1980", "30 June, 1976", "May 16 1950", "September, 19, 1922", "1930-11-15", "10/15/1920")
- Using wrong markings/separators per MOS:DOB (e.g. "13 July 1920 - 7 December 1992", "born 15 November 1944, died 8 January 2002", "15 September 1992 –", "born on July 16, 1840", "b. March 30, 1901", "d. 3 April 1933", "* 13 August 1951; † 30 May 2002")
- Infobox does not use a date template (see the list at Template:Birth, death and age templates under "Birth, death, age")
- Any dates which are in the future
- Place of birth/death in opening brackets (this is done in a lot of biographies, but violates Wikipedia:Manual of Style/Biography#Birth date and place)
- Formatting of pronunciation/alternate spellings/birth names/foreign spelling prior to date of birth/death (some articles do not format this correctly, or will use two sets of brackets in the opening sentence; an example of correct formatting is at Andriy Shevchenko)
If it's not possible to do all of these, the most helpful tasks would be the first three or four listed, but especially the first. Dates can be somewhat tricky, given that a wide variety of formatting can be used, and precise dates are not always known (sometimes only the month, year or circa is given). Any help with this idea would be greatly appreciated. S.A. Julio (talk) 00:51, 28 November 2019 (UTC)
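As an illustration of the first check only, a rough sketch comparing an infobox {{Birth date}} with a "born D Month YYYY" date in the lede. It is regex-based and deliberately simplistic; real articles use far more date formats, as the list above shows.

<syntaxhighlight lang="python">
import re
from datetime import date

INFOBOX_RE = re.compile(
    r"\{\{[Bb]irth date(?: and age)?\|(\d{4})\|(\d{1,2})\|(\d{1,2})")
LEDE_RE = re.compile(
    r"born (\d{1,2}) (January|February|March|April|May|June|July|"
    r"August|September|October|November|December) (\d{4})")
MONTHS = {m: i for i, m in enumerate(
    ["January", "February", "March", "April", "May", "June", "July",
     "August", "September", "October", "November", "December"], start=1)}

def dob_mismatch(wikitext):
    """Return (infobox_date, lede_date) if both exist and disagree."""
    ib, ld = INFOBOX_RE.search(wikitext), LEDE_RE.search(wikitext)
    if not (ib and ld):
        return None
    ib_date = date(int(ib[1]), int(ib[2]), int(ib[3]))
    ld_date = date(int(ld[3]), MONTHS[ld[2]], int(ld[1]))
    return (ib_date, ld_date) if ib_date != ld_date else None

print(dob_mismatch("born 23 January 1980 ... {{Birth date|1980|3|23}}"))
# -> (datetime.date(1980, 3, 23), datetime.date(1980, 1, 23))
</syntaxhighlight>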
Template:R from DOI prefix-related stuff
I'm trying to build Category:Redirects from DOI prefixes, but it's pretty tedious to gather all the information by hand.
In general, we can query Crossref prefixes on the REST API (etiquette). For example, https://api.crossref.org/prefixes/10.1016 returns
status "ok" message-type "prefix" message-version "1.0.0" message member "http://id.crossref.org/member/78" name "Elsevier BV" prefix "http://id.crossref.org/prefix/10.1016"
Crossref isn't the only service out there, so it won't find everything, but it's the biggest. The name of the DOI registrant (i.e. the publisher/imprint) can be found in name. This is what I'm interested in.
The request would be for the bot to crawl the following DOI prefixes
- 10.1001 to 10.9999
- 10.10000 to 10.39999
Then
- Get name from the query
- Create a table (or set of tables) similar to
DOI Prefix | CrossRef registrant (|name=) | {{R from DOI prefix}} (|registrant=) | Target |
---|---|---|---|
10.1001 | American Medical Association (AMA) | – | American Medical Association |
10.1002 | Wiley | – | Wiley (publisher) |
10.1003 | – | – | – |
10.1004 | – | – | – |
10.1005 | – | – | – |
... | ... | ... | ... |
10.1063 | AIP Publishing | AIP Publishing | American Institute of Physics |
... | ... | ... | ... |
10.39999 | ... | ... | ... |
Grouping 1000 DOI prefixes per subpage would likely be a good way to organize, e.g.
- User:Headbomb/DOI/10.1000
- User:Headbomb/DOI/10.2000
- ...
- User:Headbomb/DOI/10.38000
- User:Headbomb/DOI/10.39000
Since this would be a userspace bot, there wouldn't be any need to get approval for this. There would likely be a follow-up request to create the appropriate redirects down the road, but that would come after human review and massaging of the data to ready it for bot use. Headbomb {t · c · p · b} 17:35, 12 January 2020 (UTC)
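A rough sketch of the crawl against the Crossref REST API, shown for a handful of prefixes; a full run would cover 10.1001 to 10.39999 with appropriate throttling. Per the API etiquette, a contact address goes in the User-Agent; the one below is a placeholder.

<syntaxhighlight lang="python">
import time
import requests

HEADERS = {"User-Agent": "DOIPrefixBot/0.1 (mailto:example@example.org)"}

def registrant(prefix):
    """Return the Crossref registrant name for a DOI prefix, or None."""
    r = requests.get(f"https://api.crossref.org/prefixes/{prefix}",
                     headers=HEADERS, timeout=30)
    if r.status_code != 200:
        return None  # prefix not registered with Crossref
    return r.json()["message"].get("name")

rows = []
for n in range(1001, 1006):  # 10.1001 .. 10.1005 for demonstration
    prefix = f"10.{n}"
    rows.append((prefix, registrant(prefix) or "–"))
    time.sleep(1)  # be polite to the API

for prefix, name in rows:
    print(f"| {prefix} || {name}")  # rows for the table format above
</syntaxhighlight>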
- JLaTondre (talk · contribs) would you be interested in this? It's a bit different from JL-Bot's existing tasks, but it's related to it. Headbomb {t · c · p · b} 03:00, 20 January 2020 (UTC)
- My time is limited and you have already given me a backlog of requested changes for the existing citation task. That said, this is pretty easy to do, so it all comes down to priorities between which tasks you want worked first. -- JLaTondre (talk) 01:27, 22 January 2020 (UTC)
- I've got no real specific priorities at the moment. I'd say to do the low-hanging fruits first, and then whatever you feel like tackling. Headbomb {t · c · p · b} 01:36, 22 January 2020 (UTC)
- Okay, I'll do this next then. -- JLaTondre (talk) 22:49, 22 January 2020 (UTC)
- @JLaTondre: Feel free to upload things at User:JL-Bot/DOI/10.1000 etc instead. Headbomb {t · c · p · b} 23:53, 22 January 2020 (UTC)
- Okay, I'll do this next then. -- JLaTondre (talk) 22:49, 22 January 2020 (UTC)
- I've got no real specific priorities at the moment. I'd say to do the low-hanging fruits first, and then whatever you feel like tackling. Headbomb {t · c · p · b} 01:36, 22 January 2020 (UTC)
- My time is limited and you have already given me a backlog of requested changes for the existing citation task. That said, this is pretty easy to do. So all comes down to priorities between which tasks you want worked. -- JLaTondre (talk) 01:27, 22 January 2020 (UTC)
- Done -- JLaTondre (talk) 00:22, 28 January 2020 (UTC)