Microsoft boss hammers home 'unintended consequences' of AI

Satya Nadella implored the industry to be leaders in setting the standards around privacy and ethics in AI

It is the responsibility of everybody in the tech industry to come together and set the ethical standards around privacy and artificial intelligence (AI), according to Microsoft CEO Satya Nadella.

In his keynote address at the vendor's Future Decoded event in London, the CEO cited numerous examples of how AI is being implemented by customers, including use cases with Microsoft's HoloLens headset.

"I'm inspired by how quickly technology is being diffused, demystified and democratised," Nadella (pictured) said.

"But we, as an industry, have to mature to confront some of the unintended consequences of all these developments."

Nadella highlighted three challenges that come with the current rate of technological development: protecting privacy, ensuring cybersecurity, and making AI available to the masses.

Nadella also told the crowd that GDPR has been effective in helping to protect privacy, adding that Microsoft has taken it as a blueprint and applied it to its own global policies.

"We don't even think of it as a European regulation, but something that sets the bar for how people should think about rights," the chief exec explained.

"One thing that is secular going forward is the price of human rights. We will all have to think about the individual experiences we create to really treat privacy as a human right."

Nadella claimed that Microsoft sees 6.5 trillion security signals every day and told the crowd that it has been using this data to improve its security capabilities and build its operational security posture into its products.

The CEO also used the example of NHS Scotland's migration to Windows 10 as a subtle nudge for other NHS trusts and organisations to move away from Windows 7.

"We saw with WannaCry that the ability to detect and mitigate any malware that enters has to change - the operational security posture has to be translated into products that can be used by any trust anywhere," he stated.

"Now that NHS Scotland are moving to Windows 10, which has Defender, they have that ability to traverse the signal from where they see it to where they put the attacks."

As part of its efforts to establish an ethical code around cybersecurity, Microsoft has initiated a Digital Geneva Convention, which Nadella said many companies have already signed up to. However, he warned that to deter cyberattacks, nation-states need to be part of the conversation.

"We hope that many nation-states sign up because the damage that cybersecurity incidents cause affects citizens and small businesses the most, so we need to use our collective power to protect from these incidents, and nation-states need to be part of that," he said.

The third major unintended consequence Nadella identified lies in the fundamentals of the technology itself - the language used to train a model and the unconscious bias that may influence its development.

"The challenge of that is the biases that are there in our language will be picked up by the AI model," he said.

"How do we de-bias the word embedding that gets built into a backprop model? This is one of the things that we have to do: as we are democratising AI creation, we are also lifting the practice around AI creation so that we can deploy models that are ethical and transparent.

"AI is only as good as the data on which it trains. If it's being trained for one purpose but being used for another - that is unethical.

"As creators of technology it is our responsibility to set the standard and continue to bring that sophistication to tackling the unintended consequences of AI."