Artificial Intelligence (AI) has been widely praised for the automation it can bring to industries from agriculture to logistics. That praise has also drawn criticism, a common one being that machines will take over human jobs. Much of that alarm is overblown, because the fact is AI still has some profound limits.
Can’t “Think” for Itself
Machine learning, a subset of AI, is just what it sounds like: teaching a machine to learn. It works by feeding the machine example inputs along with the responses we expect for each one. Because of this, the machine can only handle situations that resemble what it was trained on.
The consequences show up in automated-driving failures: if a stop sign is covered in graffiti, the car's vision system may not recognize it. That kind of miss can cause accidents or traffic violations and remains one of the most significant limits on full automation.
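To make the point concrete, here is a minimal sketch of the idea, using scikit-learn and entirely made-up feature values (not any real self-driving system): a classifier handles inputs like the ones it was trained on, but can stumble on one it has never seen.

```python
# Minimal sketch: a model only "knows" the patterns in its training data.
# Synthetic data; the two features are hypothetical stand-ins for properties
# of clean road-sign images (e.g. redness, shape score).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical features for clean stop signs vs. other signs.
stop_signs  = rng.normal(loc=[0.9, 0.9], scale=0.05, size=(100, 2))
other_signs = rng.normal(loc=[0.2, 0.1], scale=0.05, size=(100, 2))

X = np.vstack([stop_signs, other_signs])
y = np.array([1] * 100 + [0] * 100)   # 1 = stop sign, 0 = not a stop sign

model = RandomForestClassifier(random_state=0).fit(X, y)

# A graffiti-covered sign: still obviously a stop sign to a human, but its
# features fall outside anything the model saw during training.
defaced_sign = np.array([[0.5, 0.4]])
print(model.predict(defaced_sign))    # may well output 0 -- "not a stop sign"
```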
Ethical Questions
Ethical concerns have become an increasingly common critique of the technology. The most significant complaints involve bias, privacy, and access. Because the technology learns from human-generated data, human bias can leak into the machine's decision-making. Discriminatory AI has already shown up in real-world deployments.
A study released by the National Institute of Standards and Technology (NIST) found that facial recognition software is up to 100 times more likely to misidentify people of color, which has led to wrongful arrests and interrogations. The software is also less accurate for women, children, and the elderly. Another study, from UC Berkeley, found that the same bias that leads human lenders to charge Black and Latino borrowers higher mortgage interest rates also appears in algorithmic lending decisions.
AI bias is a massive problem that needs to be identified and corrected, both to make the machines' outcomes more accurate and to ensure the technology helps build a more equitable world.
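As a toy illustration of how this happens (again with scikit-learn and purely synthetic data, not any real lending or hiring system), bias baked into historical labels can carry straight through to a model's predictions:

```python
# Toy illustration of bias leaking from training data into a model.
# "group" is a hypothetical stand-in for a protected attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, size=n)    # 0 or 1
skill = rng.normal(size=n)            # the thing we actually care about

# Historical decisions were partly driven by group, not just skill -- the bias.
hired = (skill + 0.8 * (group == 0) > 0.5).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill who differ only in group get different scores.
candidate_a = [[0.4, 0]]
candidate_b = [[0.4, 1]]
print(model.predict_proba(candidate_a)[0, 1])  # higher "hire" probability
print(model.predict_proba(candidate_b)[0, 1])  # lower, purely because of group
```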
We Don’t Know How AI Makes Decisions
Here’s what we know about the decision-making process in machine learning:
- The machine collects the data
- It learns patterns in the data
- It uses these patterns to make decisions
The drawback is that we don’t always know how the machine reaches its conclusions; the underlying model is typically a black box. It can tell us what it decided, but not why. That is problematic in sensitive settings, like government decision-making or high-stakes business ventures.
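Here is a minimal sketch of that black-box quality, once more with scikit-learn and synthetic data: the model hands back a decision, but its internals are just matrices of learned weights, not a human-readable explanation.

```python
# Minimal sketch of the "black box" problem with a small neural network.
# Synthetic data; the four features are hypothetical applicant attributes.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                   # hypothetical applicant features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # hidden rule the model must learn

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                      random_state=0).fit(X, y)

applicant = rng.normal(size=(1, 4))
print("decision:", model.predict(applicant))    # an answer, with no reasoning attached

# The only "explanation" available inside the model is layers of numeric weights.
for layer, weights in enumerate(model.coefs_):
    print(f"layer {layer} weight matrix shape: {weights.shape}")
```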
If you are interested in a career in technology, visit https://techoneit.com/careers/. If you are new to the industry, consider joining us as an apprentice; you can learn more at www.techoneit.com/technology-apprenticeship-program/