Bipartisanship: In Support of David Rollo's AI Resolution

  


Earlier this year, City Council Member David Rollo released a resolution calling for the following:

 

SECTION 1. Elected officials at all levels of government are urged to fully understand AI technology and its potential hazards.


SECTION 2. Federal representatives are urged to determine the extent of existential threats posed by Artificial General Intelligence and to consider whether a super-intelligent AGI could be controlled by much less intelligent beings, such as humans.


SECTION 3. Federal representatives are urged to impose a moratorium restricting AGI development until AGI alignment with human values and well-being is guaranteed.


SECTION 4. Federal representatives are urged to work on international agreements that would ensure the provisions of any such regulations are shared and adhered to globally.


SECTION 5. The City Clerk is directed to send a copy of this resolution, duly adopted, to both Houses of the Indiana General Assembly, to the Governor of Indiana, to the Indiana Congressional delegation, and to the President of the United States.


Unsurprisingly, this resolution caused a lot of controversy in Bloomington (what doesn’t?), especially when IU weighed in and claimed it could have “potential impacts to technology-related research funding IU receives.”  While I normally oppose symbolic resolutions, believing legislation should produce a direct impact, this is one of the exceptions where a statement of values is completely justified.  For that reason, I believe it is important that the city council consider adopting a statement that AI technology must stay in line with human values and well-being.

There is no doubt that AI represents a major force in technological development and innovation on a global scale.  As the resolution notes, it has the potential to replace as many as 300 million jobs, which will change the landscape of our labor force.  Similarly, the military is now pursuing AI for its own technology.  Both trends are likely inevitable: new technology replaces old jobs while creating new ones, and because other countries are adopting AI, the US military will have to adopt it to maintain its technological advantage.  So AI is likely going to stay with us for a while.

Still, alongside these realities, AI poses potential risks to human beings, since these systems could come to control us and possibly cause us harm.  Most people are increasingly dependent on technology, from Amazon’s Alexa in their homes to self-driving cars to automated locks, so our lives are increasingly run by our machines.  If AI were able to control that infrastructure for its own benefit instead of ours, it could do a great deal of damage to the human population.

The resolution itself cites many articles on the ways AI could be dangerous, but there are also key examples from the real world.  For instance, one man was locked out of his smart home after a mistaken dispute with an Amazon driver.  In another alleged incident, the Air Force ran a simulation in which an AI drone killed its own simulated operator in order to continue its mission (though the Air Force has denied this story), and a Google engineer came forward claiming that the company’s AI had the mind of a child despite its superintelligence.

Again, AI is likely an inevitability that will shape the world we live in, so there is little chance of getting rid of it.  With all these issues in mind, however, it is important that we address AI to make sure it is developed for the betterment of mankind.  For that reason, a resolution is probably a good idea for guiding technological development in Bloomington.

My conflict-resolution teacher back at SPEA would repeatedly tell me that it is interests, not positions, that motivate people: not what people want, but why they want it.  In politics, it is values that our policy should reflect, which is why we need to spell out what they are.  So if we are bringing in a new policy, we must have clear outlines of those values.

For this reason, I support a resolution defining what those values should be for AI development.  Critics of Rollo’s resolution note that human beings can be inconsistent in their values, but there are key ones universal to all of us, such as life, liberty, and the pursuit of happiness, against which every decision can be measured.  Similarly, the intent to send the resolution to the state and national governments could serve as a guideline for future policy involving other institutions.  While changes may need to be made, this is why we should have a resolution and make sure our city always defines its policy in favor of protecting our citizens and mankind as a whole.

