Wikipedia talk:Wikipedia Signpost/Single/2013-02-04
Comments
The following is an automatically-generated compilation of all talk pages for the Signpost issue dated 2013-02-04. For general Signpost discussion, see Wikipedia talk:Signpost.
Featured content: Portal people on potent potables and portable potholes (1,012 bytes · 💬)
Aww. I was expecting a Portal reference in the title. GamerPro64 01:51, 6 February 2013 (UTC)
- Been there, done that. — Crisco 1492 (talk) 02:10, 6 February 2013 (UTC)
- Well yeah but you never used "Now you're thinking with portals". GamerPro64 02:38, 6 February 2013 (UTC)
- Hmm... fair enough. Although an SNL reference (with another hidden comedy show reference) is good enough for me. — Crisco 1492 (talk) 02:42, 6 February 2013 (UTC)
In the media: Star Trek Into Pedantry (3,090 bytes · 💬)
- Regarding this edit - is it a subtitle, versus intending to be read as a sentence? (:-)) -- Seth Finkelstein (talk) 07:46, 6 February 2013 (UTC)
- Interesting that Guy Keleny of The Independent chose k. d. lang as an example. Discussion of the name of the article comprises almost the entirety of Talk:k.d. lang. Yaris678 (talk) 11:10, 6 February 2013 (UTC)
- Indeed, though at least it dates back a few years rather than all having been written in the past two months. Strictly speaking, our MOS would recommend K.D. Lang, but most of us are willing to make exceptions for personal names. The name of a work of art, on the other hand, is a different story. Powers T 14:47, 6 February 2013 (UTC)
- Thank heavens there wasn't a hyphen in the title. --Surturz (talk) 20:57, 6 February 2013 (UTC)
- No, but there was another thing. Ed [talk] [majestic titan] 21:00, 6 February 2013 (UTC)
- "If the WP:MOS prevents you from improving or maintaining Wikipedia - ignore it." --Surturz (talk) 00:46, 7 February 2013 (UTC)
- It is no improvement to blindly follow styles imposed by other sources when we have a perfectly good MOS of our own. Powers T 17:56, 11 February 2013 (UTC)
- "If the WP:MOS prevents you from improving or maintaining Wikipedia - ignore it." --Surturz (talk) 00:46, 7 February 2013 (UTC)
- No, but there was another thing. Ed [talk] [majestic titan] 21:00, 6 February 2013 (UTC)
- At risk of appearing a raging xkcd fanboy... why does this article make it seem like "The Daily Dot" broke the story, prominently quoting them and mentioning them in the first sentence? They didn't. It was the xkcd comic that was published first, and the news piece was simply riding the wave. It might be my imagination, but it seems that the Signpost quotes from The Daily Dot a good bit, which is weird since it appears to be a rather small and non-notable webzine that I've never heard of elsewhere. SnowFire (talk) 17:49, 7 February 2013 (UTC)
News and notes: Article Feedback tool faces community resistance (17,460 bytes · 💬)
- Article Feedback Tool: I've said this before many, many times, and I'll say it again here. The Foundation's passion for stats and feedback does not always contribute to the improvement of Wikipedia and its management by the volunteer community. WMF projects thrust upon the Wiki have required massive community effort to carry out the cleanups when they misfire, and reasonable solutions for improvement in the quality of new articles required by community consensus have been summarily rejected by the Foundation. While a truly excellent tool in the hands of the right users, NewPagesFeed/CurationTool does not address these issues and has not improved the quantity and quality of new-page patrolling. AfT creates more work for this community than the net useful information that it is designed to produce. Someone recently stated words to the effect that the Foundation's answer to the community's claim that a car (project) is broken is 'Keep pushing'. The only real solution is to deploy Foundation funds and resources to re-launch development of the Article Creation Workflow as a proper landing page for new users/page creators. Rather than simply wanting quantity instead of quality, the Foundation would probably rejoice at the result, which would greatly reduce the burdens and backlogs in such areas as Articles for Creation, Deletions and AfD, largely resolve the issues surrounding the work of admins and their appointment at WP:RfA, and reduce the endemic hat-collecting of minor rights. Meta areas, including WP:NPP, WP:AfC, WP:AfD, and possibly also the AfT, are a magnet for inexperienced users who cannot, or prefer not to, expand or create content. Kudpung กุดผึ้ง (talk) 02:15, 6 February 2013 (UTC)
- I'm sorry that you don't feel statistics and hard data has a role in helping the volunteer community with its workload, but I must confess to being bemused by how ACTRIAL or the problem(s) with patrolling incoming content has anything to do with AFT5 (or how AFT5 can be creating work for the community when we've said 'if you guys want to turn it off, we'll turn it off' and people seem to be heading in that direction). The Foundation is not looking at quantity instead of quality; it's looking to raise the number of people who can help with maintenance tasks. And yes, sometimes this involves not only training but also making the software easier, as we did with Page Curation, or pointing people towards those tasks that need to be done. The vast majority of users do not engage in meta areas, which is why it would surprise me to find that a majority or substantial chunk of inexperienced users did; to resort to statistics for a moment, I ran a quick database query against the patrolling tables. In the last 30 days, there have been 51 patrollers with fewer than 500 edits - that's 14 percent of the patrollers overall. They are responsible for 484 patrols, which is...6 percent of patrols. If they were doing that terrible a job, presumably people would be un-reviewing their pages - and yet in the time period specified, experienced users (>= 500 edits) unreviewed...8 pages. In total. Not sure if the initial reviews were by new people or not. I'm happy to accept that quantitative and qualitative information go hand in hand, but your argument doesn't seem to be backed up by either as you've presented it. Okeyes (WMF) (talk) 18:45, 6 February 2013 (UTC)
- 'In the last 30 days, there have been 51 patrollers with fewer than 500 edits - that's 14 percent of the patrollers overall' - you've just backed it up for me, and it's far too many. The reason their patrols have not been reverted is probably because not many patrollers are patrolling the patrollers - and that's not what we're supposed to be doing. --Kudpung กุดผึ้ง (talk) 12:34, 7 February 2013 (UTC)
- Note also "They are responsible for 484 patrols, which is...6 percent of patrols" - really, the number of patrollers in [tranche] is not useful for looking at 'are they doing it well/badly/causing more work'; the thing that counts is "how many patrols are they doing?". If we have one patroller doing 400 patrols, that makes a much bigger impact on the value of patrolling-as-a-way-of-triaging-junk than 10 patrollers doing 5 each. So, yes, they are 14 percent of patrollers: they are responsible for a much smaller chunk of the work. I certainly agree that patrollers do not exist to answer the quis custodiet problem - but either patrollers aren't seeing bad work, in which case your argument that there is a substantial problem involving poor-quality patrolling is...confusing, or patrollers are seeing bad work, and at no point deciding it's worth undoing. Okeyes (WMF) (talk) 13:03, 7 February 2013 (UTC)
- What percentage is useful? Regarding the claim that "Between 30 and 60 percent of all feedback was rated by editors as 'useful'", at Wikipedia:Article Feedback Tool/Version 5/Feedback evaluation#Is this useful? the instructions say "It is only the most entirely useless feedback that should be categorized as 'no' (not useful)." Several editors have worked together to post a random sample of 1000 feedbacks (after the anti-abuse filters and excluding anything that an editor has marked as hidden) at User:Guy Macon/Workpage. I welcome the interested reader to look at it and make their own estimate of what percentage is useful. --Guy Macon (talk) 02:43, 6 February 2013 (UTC)
- Yeah; that's actually an outdated description :). Would you like me to pull the categories/descriptions for the most recent tests? Okeyes (WMF) (talk) 17:58, 6 February 2013 (UTC)
- My personal preference is that when WMF publishes the results of a study, it should have two prominent links to "methodology" and "raw data" on the main page of the study. In this particular case the methodology link should tell me, among other things, how the test subjects were selected, what instructions they were given, etc. The raw data should be such that if I want to I can replicate your work. This would bring a welcome level of scientific rigor to these studies. While I am waiting for that to happen, I would like to see a hatnote on anything that is outdated. --Guy Macon (talk) 18:44, 6 February 2013 (UTC)
- Obviously, releasing our raw data is not always possible (some of it might be oversighted), but I'll see what I can do. Okeyes (WMF) (talk) 23:04, 6 February 2013 (UTC)
- It might be best to start with the next one. If you know that you are eventually going to publish some raw data, it is pretty easy to make a version with [Name redacted] and [Email redacted] or [Redacted for privacy reasons] as you go along. If you try to go back and do that after the fact, you always have a doubt about whether you missed one. I care far less about this particular result than I do about instilling a mentality in the WMF where they wouldn't dream about not publishing full details about methodology or not publishing raw data. And we haven't even started talking about single-blind vs. double-blind...
- If you really want to focus on this particular study, rather than gathering raw data, somebody should start asking why WMF got "Between 30 and 60 percent useful" and my preliminary results are about 10% useful. That's a huge red flag. Is it because only one person cared enough to look at my data and post an estimate? Was 200 a big enough sample? Is it because your study used 3 people? If you personally looked at the data would you come back and say that your estimate is 30%, not 10%? Is it because in both cases the person doing the evaluation was self-selected? If I saw results like that I would try to rip my own methodology to shreds and then I would try to rip the methodology of the other study to shreds. Somebody is doing something wrong. My attitude toward science: http://xkcd.com/242/ --Guy Macon (talk) 03:00, 7 February 2013 (UTC)
- Frankly, I can't answer those questions; I'm not the researcher here ;p. I'll poke Aaron and see if he can comment. Okeyes (WMF) (talk) 11:29, 7 February 2013 (UTC)
- poke received First of all, I want to direct you to the official report I wrote, which includes the strategy for drawing both a random and a stratified sample and the details of my methodology: meta:Research:Article_feedback/Final_quality_assessment. I'm sad to find that this report was not clearly referenced; you're not the first to have missed it. We had 18 Wikipedians evaluate at least 50 feedback items each (though some evaluated more than 200). All feedback submissions were evaluated by two different people. The 30-60% number is a non-statistically founded, conservative minimization of these two evaluations per item. In the study, we found that 66% of feedback was marked *useful* by at least one evaluator ("best" in the report) and 39% of feedback was marked useful by both evaluators ("worst" in the report). Here's the breakdown of the four category classes we asked the evaluators to apply:
- Useful - This comment is useful and suggests something to be done to the article.
- Unusable - This comment does not suggest something useful to be done to the article, but it is not inappropriate enough to be hidden.
- Inappropriate - This comment should be hidden: examples would be obscenities or vandalism.
- Oversight - Oversight should be requested. The comment contains one of the following: phone numbers, email addresses, pornographic links, or defamatory/libelous comments about a person.
- Note that these exact descriptions appear as tooltips in multiple places in the feedback evaluation tool. If you'd like to personally replicate the study, I'd be happy to pull another random sample for you and load it up in the evaluation tool. --EpochFail(talk • work) 15:42, 7 February 2013 (UTC)
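(To make the "best"/"worst" bounds above concrete: each item received two ratings, and the two published figures are simply the "at least one rater said useful" and "both raters said useful" shares. A minimal Python sketch follows; the sample ratings are invented purely for illustration, and only the aggregation logic follows the description above.)

```python
# Each feedback item was rated by two evaluators using the four categories
# described above. The ratings below are invented, for illustration only.
ratings = [
    ("useful", "useful"),
    ("useful", "unusable"),
    ("unusable", "unusable"),
    ("inappropriate", "inappropriate"),
    ("useful", "oversight"),  # hypothetical disagreement between raters
]

n = len(ratings)
best = sum(1 for a, b in ratings if "useful" in (a, b)) / n    # at least one rater said useful
worst = sum(1 for a, b in ratings if a == b == "useful") / n   # both raters said useful

print(f"'Best case' useful share (at least one rater): {best:.0%}")   # the study reported 66% here
print(f"'Worst case' useful share (both raters):       {worst:.0%}")  # the study reported 39% here
```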
- Before I respond, let me reiterate that I think everyone at the WMF is doing a good job and has the right goals. This is a discussion about possible improvements, starting with some future study. Those who are looking for a club to beat WMF with should look elsewhere.
- meta:Research:Article_feedback/Final_quality_assessment is a very useful overview of the methodology used, but in my opinion an additional detailed methodology would be a Good Thing. (I am about to write some questions, but please don't post the answers. They are examples of what should be in a detailed methodology -- I cannot explain what I am talking about without giving examples of questions that the overview does not answer.) For an example, the overview says "We assigned each sampled feedback submissions to at least two volunteer Wikipedians." A detailed methodology would have said something like this:
- "Between 3AM and 4AM on December 24th, we posted a request for volunteers (in French) on Talk:Mojave phone booth and on the main page of xh.wiki.x.io. 43 people volunteered, and we rejected 20 of them for being confirmed sockpuppets of User:Messenger2010 (See Wikipedia:Long-term abuse/Messenger2010) and rejected 11 of them because Guy drank too much and decided he doesn't like editors with "e" in their username. That left us with Jimbo and a six-year-old girl (username redacted for privacy reasons). We then..."
- Unlike "We assigned each sampled feedback submissions to at least two volunteer Wikipedians", the above details exactly how those volunteers were chosen. Again, I don't care how they were chosen. I just want future studies to contain a detailed methodology page that answers questions like this or questions about the RNG used. To pick another example, the post above this one says "We had 18 Wikipedians evaluate at least 50 feedback items individually (though some evaluated more than 200)." That detail is not found in the methodology overview. --Guy Macon (talk) 16:51, 7 February 2013 (UTC)
- The specific 'how they were chosen' list, I can provide, actually. The purpose of the study was to compare the rating of feedback that did get rated to feedback that got missed out on, suspecting that people overwhelmingly checked feedback for high-profile articles. In order to get some consistency between the two sets of numbers, I pulled from the database a list of all users who had, in the 30 days before we started the recruitment process, monitored more than 10 pieces of feedback in some fashion. The users in question were then sent a talkpage invitation going 'would you like to participate in this?'. I appreciate that's more a specific example to highlight a general point than anything else - and I'm going to bear your general point in mind when writing up something I've been working on recently, actually - but I thought I'd address it :). Okeyes (WMF) (talk) 18:50, 7 February 2013 (UTC)
- Unlike "We assigned each sampled feedback submissions to at least two volunteer Wikipedians", the above details exactly how those volunteers were chosen. Again, I don't care how they were chosen. I just want future studies to contain a detailed methodology page that answers questions like this or questions about the RNG used. To pick another example, the post above this one says "We had 18 Wikipedians evaluate at least 50 feedback items individually (though some evaluated more than 200)." That detail is not found in the methodology overview. --Guy Macon (talk) 16:51, 7 February 2013 (UTC)
- I took a look at the list. In detail for the first 200 or so, and then just a few samples. My estimate of useful feedback would be closer to 10%. • • • Peter (Southwood) (talk): 06:57, 6 February 2013 (UTC)
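(On the sample-size question raised in this thread: a rough binomial check suggests that an estimate of about 10% useful from roughly 200 items is very unlikely to reach 30% by sampling noise alone, so the gap with the 30-60% figure more plausibly comes from differing instructions, category definitions, or rater selection. A minimal sketch, assuming independent items and the normal approximation; the n and p values below are the approximate figures mentioned above.)

```python
import math

# Rough check: if ~10% of a random sample of ~200 feedback items were judged
# useful, how wide is the 95% confidence interval around that estimate?
# (Assumes independent items and the normal approximation to the binomial.)
n = 200       # approximate number of items examined above
p_hat = 0.10  # approximate share judged useful

se = math.sqrt(p_hat * (1 - p_hat) / n)           # standard error, roughly 0.021
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se

print(f"95% CI for the 'useful' share: {low:.1%} to {high:.1%}")
# Prints roughly 5.8% to 14.2% -- well below the 30-60% range, so sample size
# alone does not explain the discrepancy discussed above.
```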
- From the lead, "The use of these [talk] pages, though, has typically been limited to experienced editors who know how to use them." Excuse me? This claim not only biases the introduction to the article but is demonstrably not true - I find comments from new users and unregistered users on talk pages fairly frequently. Their use is hardly limited to "experienced editors". – Philosopher Let us reason together. 05:27, 6 February 2013 (UTC)
- We've had different experiences, then... I've tweaked the introduction slightly based on your comments, though. Ed [talk] [majestic titan] 05:35, 6 February 2013 (UTC)
- If we judge feedback solely by signal-to-noise ratio we do ourselves no favours at all. Charles Matthews (talk) 10:08, 6 February 2013 (UTC)
- Why do you think that? If article feedback allows the junk to sit there undisturbed -- and it does -- then article feedback will also allow prohibited material such as libel, personal details, copyright violations, spam, and violations of our living persons, sockpuppet, and banning policies to sit there undisturbed. --Guy Macon (talk) 11:18, 6 February 2013 (UTC)
- I said "solely". I made a more detailed comment in the RfC itself. By the way, WP:BEANS to your catalogue of ways the feature can be misused. Charles Matthews (talk) 12:46, 6 February 2013 (UTC)
- Thanks for writing about this RFC; I wouldn't have noticed it otherwise, and it's an important subject. -- phoebe / (talk to me) 22:42, 6 February 2013 (UTC)
- The highest quality article feedback I've seen is almost always on the article talk pages. To be useful, feedback generally needs to be longer than the short tweets I typically see from the Feedback Tool. I often find useful comments on talk pages that sit unanswered for months, or even years, before I address the issues raised. So we already have a backlog on talk pages, without increasing it with more chatter from this tool. I don't feel that even 10% of the AFT comments are useful, but I've only looked at these comments on a very limited number of articles. Wbm1058 (talk) 23:37, 7 February 2013 (UTC)
- Well, take my comments with a grain of salt, but I'm wondering why there is such a difference in quality of comments between these links. The second seems to find much higher quality comments than the first does. Maybe I just haven't been finding the right articles to view feedback on. Wbm1058 (talk) 15:30, 8 February 2013 (UTC)
Special report: Examining the popularity of Wikipedia articles: catalysts, trends, and applications (9,962 bytes · 💬)
This page has been mentioned by multiple media organizations.
This is a fantastic article; thank you for sharing. Jujutacular (talk) 02:37, 6 February 2013 (UTC)
- I agree, fascinating stuff.--ukexpat (talk) 03:22, 6 February 2013 (UTC)
- Indeed, very interesting and comprehensive. Great job! --Waldir talk 03:59, 6 February 2013 (UTC)
- Yes, this is excellent, thanks so much for doing this and writing the article about it. -- phoebe / (talk to me) 04:17, 6 February 2013 (UTC)
- Yes, completely agreed with all the previous comments. (And, for those interested in the topic of high-profile events leading to massive page view spikes pre-2010, there's some coverage here, here, here, and here.) --MZMcBride (talk) 06:19, 6 February 2013 (UTC)
- Some more notes on 2008 election traffic. I didn't get hourly figures for Obama, but I suspect in the low hundreds of thousands per hour - so just off our list above. Andrew Gray (talk) 07:31, 6 February 2013 (UTC)
- I expect the most viewed article ever will come from a celebrity who dies while playing the Super Bowl half time show ... but seriously, very interesting article. Great work. MasterOfHisOwnDomain (talk) 11:55, 6 February 2013 (UTC)
- A significant reason we write the encyclopedia is for people to read it. Wikipedia:Did you know/Statistics provides some sense of what readers look for on the Main Page. However, the analysis provided above by West.andrew.g and Milowent is exactly what we need to get a better sense of what our readers desire on a larger scale, as well as a sense of how the encyclopedia articles are being used. Great job! - Uzma Gamal (talk) 13:02, 6 February 2013 (UTC)
Not only is this a great article, it supplies important information -- not just for Wikipedia, but for the marketing world in general. Since most people don't know about the Signpost, I highly recommend posting about this article on marketing and social media sites - tweet it up. -- kosboot (talk) 14:26, 6 February 2013 (UTC)
- Like others above, I welcome the coverage of viewing statistics, which is an area greatly neglected in most wiki-discourse. But I am always less interested in the very top of the charts than the middle and bottom, and more on this in the future would be really great. A few weeks ago I posed a question on the technical pumps, asking how we can generalize about the number or proportion of crawler bot hits in the article stats; unlike everything else on the page, it received no response at all. Yet this is a key question for much current editing, which overwhelmingly concentrates on long-tail articles with low viewing figures. Also, what are we able to say about how long average "readers" spend on an article, and how much they actually read? I haven't a clue, beyond the overall average figure of a few seconds (which I can't seem to find now). Johnbod (talk) 16:24, 6 February 2013 (UTC)
- Very good work! A pity that this came out now and that it started in 2010; if it had started in 2009, it would have been able to account for stories about the death of Michael Jackson, and if it had only come a few days later, it would have been able to include the massive spike for hits on Richard III of England, which typically got a few thousand hits daily until Monday, when it got about 800,000, or almost 25× the number of hits for that day's featured article. I'll look forward to future studies! Nyttend (talk) 02:42, 7 February 2013 (UTC)
- Richard III will certainly be far up the next WP:TOP25, look for it on Monday.--Milowent • hasspoken 03:17, 7 February 2013 (UTC)
- Because I read the story too fast, your comment was my first clue to this page, and I'd never seen WP:5000 before, either. Quite intriguing! Nyttend (talk) 04:33, 7 February 2013 (UTC)
- Richard came in at #2, see WP:TOP25, beaten by a Google Doodle of Mary Leakey.--Milowent • hasspoken 05:28, 11 February 2013 (UTC)
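(For anyone wanting to reproduce this kind of spike-spotting today, here is a minimal sketch against the Wikimedia pageviews REST API. Caveats: that API only covers data from mid-2015 onwards, so it cannot retrieve the February 2013 Richard III spike itself, and its agent filter ("user" versus spider/automated traffic) is a later answer to the crawler-bot question raised above; the article title and dates in the example are arbitrary.)

```python
import requests

# Fetch daily page views for an article from the Wikimedia pageviews REST API.
# Data is only available from mid-2015 onwards, so the February 2013 spike
# discussed above cannot be retrieved this way; the dates below are examples.
API = ("https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
       "en.wikipedia/all-access/user/{article}/daily/{start}/{end}")

def daily_views(article, start, end):
    url = API.format(article=article.replace(" ", "_"), start=start, end=end)
    headers = {"User-Agent": "signpost-example/0.1 (talk-page illustration)"}
    resp = requests.get(url, headers=headers, timeout=30)
    resp.raise_for_status()
    return [(item["timestamp"], item["views"]) for item in resp.json()["items"]]

for ts, views in daily_views("Richard III of England", "20150824", "20150831"):
    print(ts, views)
```

The "user" segment in the URL is what restricts the counts to human readers, which is one way the modern statistics address the crawler-hit problem raised earlier in this thread.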
Really great article about the power of WP. The effect that WP has had on our world is huge, but unfortunately largely unmeasurable on the individuals' side of things. I thought that readers here may likewise enjoy a piece of research I recently read (it cites WP as an example), which I feel is fascinating in how it describes the power behind phenomena like WP. It's called "The Theory of Crowd Capital" and you can download it here if you're interested: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2193115 Enjoy! — Preceding unsigned comment added by 24.85.85.220 (talk) 01:24, 7 February 2013 (UTC)
- Excellent work on this piece. We need more stuff like this in the Signpost. Carrite (talk) 04:14, 8 February 2013 (UTC)
- Let's not forget we already had an article like this two years ago, though that one was a bit less detailed in its analysis. Rcsprinter (chat) @ 23:03, 9 February 2013 (UTC)
Nice work. I'm reminded of this page: Wikipedia:Short popular vital articles.
- It would be great if this research could lead to suggestions for software features: perhaps a sparkline in the margin, and perhaps indicators for articles on one's watchlist whose viewership is peaking. Shyamal (talk) 05:44, 11 February 2013 (UTC)
- FYI, this article is the basis for a Wikimania 2013 submission of the same title. Thanks (as proposer), West.andrew.g (talk) 21:43, 27 April 2013 (UTC)
Azerbaijani places
- Thanks for starting that AfD on Gasaneri, I presume due to the cool "Ә"s in your name you know your stuff. There are probably more stubs like that for Azerbaijani locations - is there official census information for each rayon that could help us improve our coverage? Regards.--Milowent • hasspoken 13:03, 4 October 2014 (UTC)
- Hello my friend. Yes, there are a lot of villages that were abolished many years ago, but today all of them are still on the English and Bahasa Wikipedias. I will start to work on them. You can also help me to delete them, because I am not an administrator and I do not have permission to delete them. If you are an administrator, or have a friend who is, please help delete them. Thank you for your attention.--Nəcməddin Kəbirli (talk) 13:19, 4 October 2014 (UTC)
Technology report: Wikidata team targets English Wikipedia deployment (6,691 bytes · 💬)
- The specification for Tools Labs is freshly improved and ready for more prioritization, adding Bugzilla links, etc. (labs-l post). Sumana Harihareswara, Wikimedia Foundation Engineering Community Manager (talk) 13:07, 4 February 2013 (UTC)
- Thanks for the report. For anyone who knows: what exactly does Wikidata "going live" entail? Does it mean we will start taking out local interwiki links? If that happens, what does it look like? Or just that it will be possible, or what? How have the other wikidata deployments gone? Thanks, -- phoebe / (talk to me) 01:21, 6 February 2013 (UTC)
- The other deployments have been relatively successful (some bug fixes, but nothing that meant they had to be reverted, AFAIK). One problem that they are having, however, is that they can't really start removing interwiki links (even if they wanted to) out of fear that a Wikidata-ignorant (but also non-API?) bot will just re-add them. (Bots aside, yes, you can simply take out an interwiki link that is also on Wikidata.) en.wp is still discussing whether it wants to allow edits that only remove them, with the probable conclusion that only articles with very many interwiki links (more than 50) should be pro-actively stripped of them. All others can be removed while making other edits (e.g. through AWB). - Jarry1250 [Deliberation needed] 10:43, 6 February 2013 (UTC)
- You can start taking out local links if you want, yes. This decision is up to the local wiki's community. A more detailed explanation is in this blog post. The three previous deployments have gone well overall. --Lydia Pintscher (WMDE) (talk) 10:45, 6 February 2013 (UTC)
- Thanks. I'm especially curious what it looks like when you do take the interwiki links out -- is there an easy way to be directed to Wikidata from the local article to edit the language links if you notice a mistake? I did follow the first Hungarian deployment but didn't figure this out. -- phoebe / (talk to me) 22:39, 6 February 2013 (UTC)
- Yes, there is a link at the bottom of the list of language links saying "edit links" that leads you to the corresponding page on Wikidata to edit them. --Lydia Pintscher (WMDE) (talk) 22:43, 6 February 2013 (UTC)
- Is there an Idiot's Guide to Wikidata somewhere that explains, with syntax and rendered examples, exactly how it is used in Wikipedia articles? Thanks. --ukexpat (talk) 04:47, 6 February 2013 (UTC)
- Sorry, there isn't. You have to see Wikidata as an entity like Wikimedia Commons; a separate Wikimedia project with its own set of admins and international people who form its core community. You can browse the help pages, but it may help to install the gadget that lets you view the items associated with articles (or just click on the Hungarian links in widely translated articles and take a look at how the interwiki column is set up now - with both manual and Wikidata links). Jane (talk) 08:37, 6 February 2013 (UTC)
- Phase 1 or phase 2? Phase 1 is just magic (for reading) and a little helper dialog box in the interwiki section that edits Wikidata for you without you leaving the page (for writing) IIRC. The syntax for phase 2 has changed a bit recently, but it'll certainly be some sort of parser function {{#property:...}}, though a community might want to wrap that in a template, depending on how the community decides to use Wikidata data on its pages. HTH, - Jarry1250 [Deliberation needed] 10:43, 6 February 2013 (UTC)
- Language links will automatically show up in the article's sidebar. No special syntax is needed anymore. Any data beyond language links can't be used on the Wikipedias yet. That will still take a bit. All in all the comparison to Commons is a good one. Think of it as becoming the Commons for stuff that is in infoboxes now. --Lydia Pintscher (WMDE) (talk) 10:45, 6 February 2013 (UTC)
- Considering we are on the English Wikipedia right now, I think it's reasonable to interpret Ukexpat's request as pertaining to how Wikidata data is planned to be used on the English Wikipedia. It is akin to asking "what is the image policy on en.wikipedia?" rather than "how do I upload files to Commons?" Powers T 14:52, 6 February 2013 (UTC)
- For the first roll-out here you'll not have to do anything for the links to show up. For a large number of pages these links have already been collected on wikidata.org. Existing links in the wikitext will continue to work and show up. In addition you'll be able to suppress language links coming from Wikidata for specific articles by using the noexternallanglinks magic word. --Lydia Pintscher (WMDE) (talk) 15:03, 6 February 2013 (UTC)
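(To make concrete what "language links coming from Wikidata" means, here is a minimal sketch that fetches the sitelinks stored on the Wikidata item linked to an English Wikipedia article. It uses the standard wbgetentities API module rather than anything specific to the deployment described above; the article title is just an example.)

```python
import requests

# Look up the Wikidata item connected to an English Wikipedia article and
# print the sitelinks (interlanguage links) stored centrally on Wikidata.
params = {
    "action": "wbgetentities",
    "sites": "enwiki",
    "titles": "Richard III of England",  # example title only
    "props": "sitelinks",
    "format": "json",
}
resp = requests.get("https://www.wikidata.org/w/api.php", params=params,
                    headers={"User-Agent": "signpost-example/0.1"}, timeout=30)
resp.raise_for_status()

for qid, entity in resp.json()["entities"].items():
    print(f"Item {qid}:")
    for sitelink in entity.get("sitelinks", {}).values():
        print(f"  {sitelink['site']}: {sitelink['title']}")
```

Once these centrally stored links are served to the article's sidebar, the interwiki lines in the local wikitext become redundant, which is what the removal discussion above is about.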
- @ Powers - I believe the Wikidata "data policy" still needs to be decided upon here on the English Wikipedia, but the current RFC is out there just as a heads-up to Wikipedians and any bots they may have. I think the basic idea is that Wikidata, like Wikimedia Commons, will grow and morph into its own identity, while local Wikipedians here will be able to use its services or not as they see fit. The first service offered is interwiki links, so it makes sense to look at that issue first. As an aside, though I am an avid user of Wikimedia Commons images for English Wikipedia articles, I have never bothered to read our local "image policy" and if someone asked me about it I would have assumed that it was for local images, not images from Wikimedia Commons. Jane (talk) 13:09, 7 February 2013 (UTC)
WikiProject report: Land of the Midnight Sun – WikiProject Norway (0 bytes · 💬)