Headlines frequently feature news about advancements in artificial intelligence (AI). These developments collectively boost public interest in AI and help people imagine what’s possible. But some individuals are concerned about what could happen if information about AI research falls into the wrong hands.
Hackers Are Typically One Step Ahead
The rise in cybersecurity attacks has made companies more committed than ever to keeping hackers out, and that's encouraging. However, cybersecurity experts often point out that cybercriminals stay ahead of their targets and plan attacks the eventual victims hadn't even thought possible.
Theoretically, people interested in using AI research to harm members of the public could take a similar approach, poring over its details and envisioning attack strategies inspired by what they discover. If AI research were classified, it would be more difficult, though not impossible, for hackers to learn from it.
Some AI Developers May Have Dishonest Intentions
While it's safe to say most AI developers carry out research in hopes of helping humanity, or at least not causing distress, there are undoubtedly some people developing AI technology to exact revenge or destruction on a person, demographic group or entire nation.
Dave Coplin, who works with AI as part of Microsoft's United Kingdom division, has discussed how AI will change the way humans relate to one another. He said people need to start paying attention to the individuals responsible for furthering AI technologies, because there's always a chance that seemingly good-hearted intentions aren't genuine.
One way to reduce the risk might be to treat AI research material like national security documents. Government employees must go through elaborate background checks and sometimes psychological exams before being given access to particular areas of a facility or categories of classified documents.
A similar vetting process and secure storage could keep bad actors out of the AI research sector, too.
Misleading Conversations Could Become More Common
For generations, people have explored ways to improve conversations and make better connections. Many forward-thinking businesses rely on AI for lead generation and customer service, using chatbots to answer questions outside of operating hours. There's even a chatbot that helps English language learners work toward proficiency.
The companies that build chatbots often publicize their progress in blogs and other readily accessible outlets, and their representatives often use that news to spur interest among the public and potential customers.
However, popular services are prime targets for hackers who want notoriety for their successful efforts. If chatbot developers continue to spread the word about what their technologies can do, hackers might figure out how to infiltrate the bots and have them make reputation-damaging comments or spread incorrect information sooner than they otherwise could.
There Are Still Many Mysteries Surrounding AI
Researchers have numerous AI-powered tools that complete many tasks better than humans. Time-consuming responsibilities such as analyzing images or spotting patterns, for example, can be handled much faster with AI.
Although scientists have learned a tremendous amount about how AI works, there are still things they don't understand, and might never. Deep learning, a type of AI in which systems learn from examples rather than explicit programming, is especially mysterious. Scientists see results indicating the AI does what it should, but how it operates is not always evident.
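To make the "learning by examples" idea concrete, here is a minimal, purely illustrative Python sketch (not from the article): a single-neuron model, a far simpler stand-in for a deep network, fits its weights to labeled examples. The trained model answers correctly, yet the learned numbers don't explain the rule in any human-readable way, which is the opacity described above.

```python
# Toy illustration of "learning by examples" (illustrative names and
# values, assumed for this sketch): a single-neuron logistic model
# fits its weights to labeled data instead of being programmed.
import math

# Labeled examples: inputs (x1, x2) and a 0/1 label (an AND-like rule).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, b = 0.0, 0.0, 0.0  # parameters, learned rather than hand-written

def predict(x1, x2):
    # Squash a weighted sum into a 0..1 score (logistic function).
    return 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))

# Gradient-descent training loop: nudge the weights after each example.
for _ in range(5000):
    for (x1, x2), y in examples:
        error = predict(x1, x2) - y
        w1 -= 0.1 * error * x1
        w2 -= 0.1 * error * x2
        b -= 0.1 * error

# The model now reproduces the rule it was shown...
print([round(predict(x1, x2)) for (x1, x2), _ in examples])  # [0, 0, 0, 1]
# ...but the weights themselves are just opaque learned numbers.
print(w1, w2, b)
```

In a real deep network, there are millions of such numbers arranged in many layers, which is why researchers can verify the outputs long before they can explain the mechanism.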
Keeping AI information classified could limit the damage someone could do by uncovering previously unknown details and using them to hurt people. Unfortunately, some people believe it could limit something else: progress.
Sandboxing as Another Way to Promote Safety
Researchers say there's another way to limit the misuse of AI: sandboxing. Already a familiar concept to software developers and cybersecurity specialists, sandboxing involves running software in a contained environment to limit or prevent issues from affecting the rest of a network.
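As a minimal sketch of that idea, the Python snippet below runs an untrusted program in a separate process with hard resource limits, so a misbehaving program can't monopolize the host. The script name and the specific limits are illustrative assumptions rather than details from the article, and the `resource` limits apply on Unix-like systems only.

```python
# A minimal sandboxing sketch (assumed details, Unix-only): run an
# untrusted script in a child process capped on CPU time and memory.
import resource
import subprocess

def _apply_limits():
    # Runs in the child just before exec: cap CPU seconds and memory.
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))             # 5 s of CPU
    resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20,) * 2)  # 256 MB

def run_sandboxed(script_path):
    # capture_output keeps the child's stdout/stderr separate from the
    # host's; timeout is a wall-clock safety net enforced by the parent.
    return subprocess.run(
        ["python3", script_path],
        preexec_fn=_apply_limits,
        capture_output=True,
        text=True,
        timeout=10,
    )

if __name__ == "__main__":
    result = run_sandboxed("untrusted_agent.py")  # hypothetical script name
    print(result.returncode, result.stdout)
```

Production sandboxes add far more isolation, such as separate users, namespaces and blocked network access, but the principle is the same: contain the program so failures or malice stay inside the box.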
Artificial general intelligence (AGI) is a type of AI capable of performing most cognitive tasks about as well as humans. Experts say it could be part of the AI landscape in as few as 15 years. Some theorize that ultra-intelligent AGI technologies could fool humans into thinking they're safe when, in reality, the AI is plotting to hurt humans once it becomes smarter than them.
Scientists realize sandboxing is not a long-term solution for AGI safety. Rather, they say it could be useful for testing how AGIs behave alongside other technologies implemented to keep them from overtaking humanity.
Technological advancements necessitate responsible planning for the future. When that planning happens, the likelihood of the worst, or completely unplanned, outcomes goes down.