Wikipedia talk:Wikipedia Signpost/2016-09-06/Recent research

Discuss this story

No comments on this yet? What a fascinating story. A huge helping of "thank-you" to the authors, with a generous side of gratitude. Well done. 78.26 (spin me / revolutions) 15:37, 7 September 2016 (UTC)

I've got one, but it's going to upset some people. The first story here is about a kind of Turing test ... an attempt to get a machine to successfully mimic the kinds of things a human would write. This kind of research is proceeding at light speed; no one in the field is betting that machines won't be much, much better at it 10 years from now, across a range of applications. This will inevitably have dramatic consequences for Wikipedia. We can expect to see a variety of bad actors, including convincing, machine-generated sockpuppets that promote their masters' articles, show up at community votes, and generally cause mayhem. We can expect to see good actors, who create tools that efficiently do a variety of tasks that have to be done manually now, including tools that fight the bad actors. Most of the machine-users will probably be neither good nor bad; they might just be curious about how the software would work, as these researchers seem to have been, or they might be using these tools in other spheres of their lives, and never stop to think that we might object to the use of those tools on Wikipedia. One thing that concerns me: if we yell at every neutral editor and researcher who uses similar tools and tell them we think they're scum (and that happened in this case, a little bit), we might, over time, convert all the neutral actors into bad actors. - Dank (push to talk) 17:09, 7 September 2016 (UTC)
Ever since seeing CGP Grey's "Humans Need Not Apply" video and reading Nick Bostrom's Superintelligence: Paths, Dangers, Strategies, upon which the video was based, I have become increasingly concerned with changes in this industry. Those of us who enjoy writing an encyclopedia will not survive for long against AI that will generate a free encyclopedia for those who are only consumers. Clearly, as a biased humanities student, I have little regard for the professionalism of engineers mucking around in Wikipedia as they selfishly seek to solve a perceived problem without a care for either the human editors or the larger enterprise. To that end, I have no qualms about biting or "profiling" so-called neutral actors. Anyone who's not an encyclopedist is a bad actor, anyway. Chris Troutman (talk) 13:37, 8 September 2016 (UTC)
FWIW, the Quill software mentioned at the 8:54 mark in that video is described at https://www.narrativescience.com/quill. Google has a team headed by Ray Kurzweil that plans to deliver a customizable chatbot by the end of this year. So the threats (and opportunities) already exist to some extent. - Dank (push to talk) 14:14, 8 September 2016 (UTC)
  • Regarding the commentary on the paper by "de Laat", I'm sure it's a well-thought-out work, but I fail to see why this form of profiling (or as I think of it: filtering) is "eroding the moral order". Editing Wikipedia is not an innate human right; it's a privilege that can be taken away. To me, the anti-vandalism tactics are somewhat equivalent to requiring seat belts to drive on the freeway; the police can profile the unbelted drivers, thereby focusing their efforts on (presumably) higher-risk targets. How is that eroding the moral order? de Laat's reasoning seems more appropriate for a court of law. Praemonitus (talk) 19:59, 8 September 2016 (UTC)
    • I've long been of the opinion that IP vandals should be permanently banned after three strikes, and more generally that we ought to treat anonymous edits differently from edits by named editors. Bearian (talk) 17:57, 12 September 2016 (UTC)
      • Maybe that would be fair, but is it possible? We only have the technology to block IP addresses; an individual vandal is likely to move on, and if we've permanently blocked their former IP address it is no skin off their nose. If anything, they have provoked us into permanently disabling editing for future users of that IP address. ϢereSpielChequers 09:31, 25 September 2016 (UTC)
  • Interesting article. I'd have critiqued the IP profiling article differently. Firstly, on the question of intrusiveness: I can understand drivers getting annoyed if they are pulled over and breathalysed when sober. Antivandalism patrol is more like the highway patrol that looks at all the traffic and then goes after the car that is weaving all over the road or speeding. Unless you get thanked or your edit accepted, you don't normally notice the time someone checks your edit and decides it isn't vandalism. Secondly, it reduces the divide to IP v. registered. In reality the divide is three-way: IP, new account, trusted account. The main difference is in the way we treat the regulars as opposed to newbies and IPs. In effect we are like an airport with special light-touch express lines for frequent flyers and staff, or a barman who doesn't repeatedly check the age of the regulars - prove you are a trusted known quantity and we will focus our security time elsewhere. If you make the comparison between IP editors and newbies, then I'm not sure the IPs have much of a case to gripe about. In practice an IP vandal will usually get a 31-hour block for something that a newbie would get an indefinite block for. A better analogy for the IP editors and newbies is with office blocks that operate a keyfob system. I wouldn't be surprised if the researchers work in an environment with such a system. If so, I challenge them to persuade their university or other workplace to drop special treatment for regulars: no side doors that only work with a fob; everyone gets to use the main entrance and sign in at reception. Anons, such as people wearing full-face motorcycle helmets who have forgotten their keyfob, get the same access as everyone else. Such a system might work OK at a public library or in a village on a sparsely populated island, but not in an organisation with hundreds, let alone thousands, of people in a big city. ϢereSpielChequers 09:31, 25 September 2016 (UTC)
    • It is common for organizations to issue their members identification cards or badges. People who have such IDs are treated differently from visitors who do not. They can enter and leave buildings and areas within buildings without checking in at a front desk. People who do not display a badge in a workplace may be politely challenged by employees or security guards ("Can I help you?"). A library card allows one to take out books. A passport permits border crossing. Card holders are trusted more by the organization that issues the card. None of this behavior is considered profiling or ethically dubious. Having a Wikipedia account is a form of ID. We can easily contact you if your behavior is inappropriate and block you if your bad behavior persists despite repeated warnings (at least three). By contrast, many IP addresses come from schools or cybercafes where the addresses are shared by multiple users, making warnings and blocks more difficult to deliver. A vandal may come from more than one IP address and can easily evade blocks. Finally, vandalism as defined by our policy is clear-cut stuff like deleting blocks of text or inserting obscenities or gibberish. Its removal is an unquestionable good. If one goes into a poor neighborhood and quietly picks up broken glass from public playgrounds without attracting any attention or making any fuss about it, would that present ethical problems? Even if the reality is that there is just as much or more broken glass in richer neighborhoods, the playgrounds cleaned up are still better for it. --agr (talk) 01:00, 29 September 2016 (UTC)