This is an extension of a Twitter thread I wrote in response to this tweet and thread about the Cambridge Analytica revelations.
One of the key modern problems is how easy it is to access these tools. You don’t need professional training to string these together.
It’s as dangerous as if someone invented a weapon that could kill tens or hundreds of people, made it light enough to carry anywhere, sold it in any store without training, and expected owners to police themselves.
People are terrified of AI. We know we don’t need AI to disable hospitals. We don’t need AI to intercept Facebook logins (although Firesheep and the WiFi Pineapple are less effective now). We don’t need AI to send a drone into a crowded market.
Make a website the only place to apply for government services, such as Medicare or millennial railcards, and it’s easy to cut off access for any citizen.
But combine all that with data and you can fuck up someone’s life without trying. You can give two people the same National Insurance number or other ID. You can flag them on the no-fly list.
The algorithm is God. The algorithm is infallible. Trust the algorithm.
Even when it tells you someone is more capable than the humans say she is, and punishes her for it.
(unless you’re under GDPR where you have the right to question the algorithm)
But tell anyone that people will use data for purposes they hadn’t considered (like using RIPA anti-terror legislation to check whether someone lives in a school catchment area) and you’re dismissed as paranoid.
Be paranoid. People will always stick crowbars in the seams. Whatever your worst case scenario for your code is, you’re probably not even close.
You can see my original tweet, and the replies, here:
craignicol (@craignicol), 18:44 - 19 Mar 2018: “One of the key modern problems is how easy it is to access these tools. You don’t need professional training to string these together.”

Quoting Tim O’Reilly (@timoreilly): “Wow. This thread is powerful and important. And computer science is having that moment of reckoning now. Facebook and Cambridge Analytica is only the tip of the iceberg. https://t.co/6C0Kw1Dt74”
The Guardian also has a great podcast interview on AI, existential threats, and ethics here.