Should artificial intelligence have freedom of choice?

Artificial intelligence systems are starting to think “like humans” rather than just calculating potential options, but could their full exploitation trigger liability risks?

Artificial intelligence can “think” like humans

As anticipated, IoTItaly, the Italian Association on the Internet of Things of which I am one of the founders, ran an event in collaboration with STMicroelectronics named “Creativity and technology at the time of Industry 4.0” on 30 May 2017.

There were a number of panels during the event, but I participated in a very interesting discussion about artificial intelligence, and the topic quickly focused on whether

  • either artificial intelligence should just accumulate knowledge and, on the basis of such knowledge, enable assessments that would be impossible for humans;
  • or it can go beyond logical reasoning and make decisions that are more “intuitive”.

I found fascinating the video below, which tries to explain Google’s DeepMind system.

As mentioned in the video, the “symbolic” event considered to mark the moment when machines started to be “intuitive” is the victory of the AlphaGo artificial intelligence system against a master of the ancient Chinese game of Go.

DeepMind is the evolution of this approach. Indeed, its website defines it as follows:

DeepMind is the world leader in artificial intelligence research and its application for positive impact. We’re on a scientific mission to push the boundaries of AI, developing programs that can learn to solve any complex problem without needing to be taught how.

AI no longer receives instructions; it learns by itself how to do things and starts thinking like humans. This is really impressive, since the level of understanding of situations that artificial intelligence systems can reach is limitless and the gap between humans and machines is disappearing.

What legal issues may arise from AI’s free decisions?

If artificial intelligence systems are left free to reach their own decisions, a number of “unexpected” new legal issues come up:

1. Who is liable for artificial intelligence systems

The Legal Affairs Committee of the European Parliament approved a report calling on the EU Commission to introduce a set of rules on robotics. The Committee is in favour of the introduction of strict liability rules for damages caused by robots, requiring only proof that damage has occurred and the establishment of a causal link between the harmful behaviour of the robot and the damage suffered by the injured party.

But what happens in the case of systems like Google DeepMind that are not instructed to perform certain activities, but just do them? Compulsory insurance schemes might be the solution, but this would add a further layer of costs, limiting the growth of these technologies.

2. Did artificial intelligence act ethically?

This is another topic touched on during the IoTItaly event. The best choice taken by the machine might not be the most ethical one. Does this mean that artificial intelligence cannot be left totally free to take its decisions?

As discussed in this blog post, some companies are already establishing ethical committees to define ethical principles to be imposed on machines. This might mean that artificial intelligence systems will

  • learn to also act ethically;
  • have their decisions reassessed by a human, as happens in the medical sector; or
  • be used in a fully unleashed manner only in contexts where no harm to humans can be caused.

3. Are you able to justify the decision of artificial intelligence?

Under the terms of the European General Data Protection Regulation, individuals are entitled to object to decisions based solely on automated processing; in the case of health-related data, such processing can take place only with their consent.

What will happen if the manual reassessment of a situation is not able to achieve a full understanding of the reasoning performed by the machine?

These are just some initial thoughts on a topic that is definitely fascinating.

If you found this article interesting, please share it on your favourite social media!

@GiulioCoraggio

Follow me on LinkedIn – Facebook Page – Twitter – Telegram – YouTube – Google+

How outsourcing changes with IoT and Artificial Intelligence

Outsourcing agreements might considerably change with the usage of IoT and artificial intelligence technologies.

The battle on liability clauses of outsourcing agreements

A few years ago I published a blog post on liability clauses in outsourcing agreements, describing their negotiation as the “battle”. And indeed, in my experience, negotiations on such clauses, as well as on service levels and the liquidated damages/penalties triggered by their breach, take up almost half of a whole contractual negotiation.

The position of the parties is that

  • the supplier cannot accept an excessively high liability cap, since otherwise the agreement would represent a disproportionate risk for its business compared to the price received for the services; while
  • the entity receiving the services does not want contractual limitations and wants to be able to recover any suffered damages quickly.

The matter is somewhat “facilitated” in countries like Italy, where limitations of liability for cases of gross negligence and wilful misconduct are null and void. This means that there is not even scope for negotiation on these scenarios, since any restriction on claiming damages under such circumstances would not be valid.

How the battle changes with the IoT and artificial intelligence

The IoT and artificial intelligence are, by design, able to predict malfunctions and either avoid their occurrence or limit their negative consequences on the business.

Sensors embedded in industrial plants can provide, at any time, a clear picture of the status of the machines and, in some cases, of the whole production line, ensuring that necessary maintenance activities are performed before a negative event takes place. At the same time, artificial intelligence systems, as well as machine learning technologies, can gain a much better understanding of potential forthcoming downtimes and of the measures to be adopted to prevent them from happening.
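To make the idea above concrete, here is a minimal sketch of how sensor-based monitoring might flag a machine for maintenance before a failure occurs. The sensor names, thresholds and window size are purely illustrative assumptions; real industrial systems rely on far richer models.

```python
# Minimal sketch of sensor-based predictive maintenance.
# Sensor names and thresholds are hypothetical, for illustration only.

def moving_average(readings, window=5):
    """Average of the last `window` readings, to smooth out noise."""
    recent = readings[-window:]
    return sum(recent) / len(recent)

def needs_maintenance(vibration_history, temp_history,
                      vibration_limit=7.0, temp_limit=85.0):
    """Flag the machine when smoothed readings cross failure thresholds."""
    return (moving_average(vibration_history) > vibration_limit
            or moving_average(temp_history) > temp_limit)

# Example: vibration creeping upwards over recent cycles,
# while temperature stays within its limit.
vibration = [5.1, 5.4, 6.2, 7.3, 7.9, 8.4]
temperature = [70.0, 71.5, 72.0, 72.3, 73.1, 72.8]

if needs_maintenance(vibration, temperature):
    print("schedule maintenance before downtime occurs")
```

The design choice illustrated here is the contractual one discussed in the text: maintenance is triggered by the trend of the readings rather than by an actual breakdown, which is why service levels negotiated around such systems can be stricter than in traditional outsourcing.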

But if, despite the above technologies, a malfunction takes place, there is a risk that very large damages will occur, since it means that a massive incident has happened.

The above means that

  • service levels might become considerably stricter than those currently agreed, because the likelihood of downtime will be much lower and, should downtime occur, the artificial intelligence system will be able to identify the most appropriate remedy immediately; while
  • liability caps might also become higher, since if a malfunction does take place, much larger damages are expected to be generated.

It is likely that the scenario above will materialise in the medium/long term, since it requires that Internet of Things and artificial intelligence technologies become the backbone of the services provided. At the same time, there might be a “transitional” phase during which suppliers will still be unable to justify to their insurers and shareholders why high liability caps can be accepted on account of the employed technologies.

What is your view on the above? I would be happy to discuss, and if you found this article interesting please share it on your favourite social media.


Top 5 Internet of Things predictions for 2017

The Internet of Things experienced a massive acceleration in 2016, but what are the predictions for 2017? What should we expect?

After the success of the 2015 and 2016 predictions on the IoT, below are my personal top 5 predictions on the legal issues that will affect the Internet of Things in 2017.

1. The Internet of Things is not just a technology, but will change business models

I have already discussed this on several occasions. The general understanding is that Internet of Things technologies just rely on sensors, which can lead to predictive maintenance and additional efficiency. However, this is only part of the picture. A major shift is taking place from a business model based on the provision of products to

  1. a business model based on the offering of services; and
  2. in the case of B2B transactions, especially those relating to Industry 4.0 technologies, a profit-sharing approach.

This shift has considerable legal consequences. Indeed, sensors make it possible to obtain a very large amount of information about customers, not only in terms of personal data but also of trade secrets and confidential information, leading to new legal issues (never experienced before) concerning, among others, data protection, intellectual property, cyber security and product liability.

2. Banks and insurance companies will adopt Internet of Things technologies to survive

Connectivity, telematics and digitalisation are not optional for banks and insurance companies. If they want to “survive”, they will have to innovate and – according to estimates – do it fast. FinTech and InsuranceTech are on the agenda of all these companies, but they also require a fast change in the whole company’s approach to the business.

But, as I mentioned in a previous blog post, “you cannot do I(o)T alone“. The Internet of Things requires the setting up of partnerships that need to enable interoperability between technologies of different suppliers. This might lead to major cyber security issues that shall be handled by means of appropriate technical and legal measures, such as the implementation of a cyber security policy to test products and a cyber risk procedure to react to cyber attacks, as well as through the implementation of a privacy by design approach and the performance of a privacy impact assessment.

Also, when FinTech and InsuranceTech meet the IoT, new legal issues arise, as outlined in this post. These issues are often addressed very late by banks and insurance companies, partly because they put their legal department out of its “comfort zone“. This is why both the management and the legal department of those companies need to be evangelised about the new legal problems deriving from these technologies.

3. Privacy by design will protect IoT businesses

The EU General Data Protection Regulation (GDPR) poses considerable new risks for Internet of Things technologies, especially given the current uncertainty as to the allocation of responsibilities between the different parties involved and the regulatory obligations. At the same time, as shown by the recent cyber attacks that exploited IoT technologies, it is not possible to be 100% protected from potential cyber risks.

The matter cannot be underestimated given the potential fines provided by the GDPR. Also, the new principle of “accountability” prescribed by the EU Privacy Regulation places the burden of proving compliance with the regulation on the investigated party, leading to what is commonly known as “probatio diabolica” (the devil’s proof, i.e. a burden that is almost impossible to discharge).

The implementation of a privacy by design approach, accompanied by a privacy impact assessment, enables companies to prove the adoption of whatever was required by applicable data protection laws, putting businesses in a much safer position. However, the implementation requires continuous review in order to remain a valid defence. This review shall follow not only the launch of new services and functionalities, but also the development of technologies and security requirements.

And the matter is even more complex in the case of usage of artificial intelligence technologies that will pose not only data protection and liability issues, but also new ethical issues.

4. Industry 4.0 technologies will lead to a battle on data ownership

Companies are reaching a higher level of awareness as to the value of data. This is relevant when it comes to personal data, for which it is necessary to identify techniques that preserve their value for the business collecting them while at the same time ensuring privacy compliance.

But the matter is becoming exponentially prominent when it comes to industrial data generated by Industrial Internet of Things technologies. Suppliers and exploiters of Internet of Things technologies are assessing the best-placed legal basis to protect their data. Long negotiations are expected over who owns the data generated by the usage of Industry 4.0 technologies. Is it more relevant to keep control of data, or to have it aggregated into big data in order ultimately to obtain a better service?

The above is happening during a period when European regulators are planning to expressly expand data protection and copyright regulations in order to cover data generated/collected by IoT technologies.

5. Blockchain is a resource for the IoT, but the market is still hesitant

The blockchain technology is very useful for the exploitation of IoT devices as outlined in this article. But, also because of some negative publicity around Bitcoin, there are still considerable concerns about its usage.

Companies might not be able to afford the risks associated with a technology that might get out of the control of its exploiters, leading to issues as to the allocation of the relevant responsibilities. However, the adoption of “closed” blockchains might undermine the high level of security ensured by an open blockchain. I wonder whether the right balance will be identified in 2017.
