Apple has become the latest tech giant to commit to voluntary artificial intelligence safety guidelines set forth by the Biden administration. The move follows rapid advances in AI over the past couple of years, which have raised concerns about the technology’s potential impact on society.
Alongside industry leaders like OpenAI, Amazon, Google, Meta, and Microsoft, Apple has pledged to rigorously test its AI systems for biases, security vulnerabilities, and national security risks. The company will also publicly share the results of these assessments.
“The principles call for companies to transparently share the results of those tests with governments, civil society and academia — and to report any vulnerabilities.”
Bloomberg
The White House has actively pushed for responsible AI development, recognizing its potential benefits and risks.
President Biden signed an executive order last year mandating safety testing for AI systems purchased by the federal government.
For now, the guidelines remain voluntary, and critics argue that stronger regulation is needed to ensure public safety. Congress has shown interest in AI legislation but has yet to pass any significant bills.
Meanwhile, the AI landscape is evolving rapidly. Apple’s partnership with OpenAI, announced at WWDC to integrate ChatGPT into the iPhone, was the talk of the town over the past month or so. The deal has not been without controversy: since Apple hasn’t paid OpenAI a cent, some argue the company may be sharing data with OpenAI instead.
As AI continues to shape industries and daily life, the balance between innovation and safety remains a critical challenge.
More here.