Apple’s Bold Move: Embracing White House AI Safeguards
The White House announced on July 26 that Apple, along with more than a dozen other tech companies, has agreed to follow voluntary safeguards intended to reduce the risks posed by AI.
A year earlier, AI rivals including Amazon, Google, Microsoft, and OpenAI joined US officials in announcing the voluntary pact. At the time, President Joe Biden’s administration said the companies had pledged to “help move towards the safe, secure, and transparent development of AI technology.”
Tech companies protect AI models by simulating attacks on them through a testing method known as “red teaming,” which exposes flaws and vulnerabilities.
According to the White House, testing of AI models or systems should cover threats to society and national security, such as cyberattacks and the development of biological weapons. The companies that signed the pledge will share information about AI threats and attempts to evade safeguards with the government and with one another.
Apple unveiled “Apple Intelligence,” a set of AI features for its popular devices, in June, seeking to reassure users that it has not fallen behind in AI. The company also announced a partnership with OpenAI that lets iPhone users call on ChatGPT when they request it.
Biden’s 2023 AI Safety Order
At the end of 2023, Mr. Biden signed an order that established new safety standards for AI systems and required developers to share the results of any safety tests with the US government. The White House called the order “the most comprehensive step ever taken to protect Americans from the possible risks of AI systems.”
Shortly after the order was signed, Vice President Kamala Harris delivered a major speech on AI policy to an audience of politicians, tech industry leaders, and academics. The event focused on growing concerns about the effects of advanced AI models, which range from job losses and cyberattacks to humans losing control of the systems they built.