IBM Offers AI Customers IP Protection: A New Trend in AI Tools

Updated: November 22, 2023

Recently, IBM announced that it will offer intellectual property protections to customers of its AI models, providing a layer of legal security and safety for those adopting AI solutions. What challenges is AI now facing, why is IBM offering IP protection, and what does this mean for the future of AI solutions?

What challenges is AI now facing?

The rapid growth that AI has seen over the past few years has been undeniably jaw-dropping, with businesses left, right, and centre all trying to get onto the AI bandwagon. In fact, AI is being adopted so quickly that companies which make even the smallest reference to AI are catapulted up search engine rankings and reported on in the news.

However, just as anything popular attracts positive attention, it can also attract negative attention. For example, some users have exploited AI to create deepfake content of individuals. In many cases, these deepfakes are made for the sake of humour and are obviously fake, but in other cases, they are used to make harmful content that is almost indistinguishable from real footage.

Another major challenge facing AI is data privacy, and how AI systems store private data. For example, many personal devices connected to the internet (such as smartphones) are being used to collect private data that goes on to train AI models.

While there is little to stop companies from carrying out this type of data collection (as they will often include it in their terms of service), they are legally obliged to store this data safely. Of course, no software system is immune to cyberattacks, and it may only be a matter of time before an AI service is attacked on a large scale, exposing all the private data that has been gathered.

But one major challenge that AI is now starting to face is copyright and IP infringement. Simply put, many large AI models (such as ChatGPT) scrape the internet for publicly accessible data and train on it. However, according to IP holders, even though this data is accessible to anyone, using it to build an AI model that is then deployed commercially may be in violation of the law.

As such, numerous AI companies are now facing lawsuits, which could very well cripple the AI industry as we know it. Worse, content generated by such models could itself be in violation of the law, meaning customers' AI solutions could also be affected.

IBM to offer AI IP protection to customers

Recognising these challenges and the numerous upcoming lawsuits, IBM has recently announced that it will soon be offering IP protection to customers of its AI solutions. Simply put, IBM's AI offerings are first trained on IBM's own datasets, and customers can then use their own datasets to fine-tune the model.
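
To make that two-stage pattern concrete, here is a minimal sketch of "pre-trained base model plus customer fine-tuning" using the open-source Hugging Face libraries. This is not IBM's actual tooling; the model name, labels, and tiny dataset are illustrative assumptions only.

```python
# Sketch: fine-tune a vendor-trained base model on a customer's own data.
# Assumptions: a generic classification task, a public base model standing
# in for a vendor-certified one, and a toy two-example dataset.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

base_model = "distilbert-base-uncased"  # stand-in for a provider-trained model
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=2)

# The customer's own (hypothetical) labelled data used for fine-tuning.
customer_data = Dataset.from_dict({
    "text": ["invoice approved", "payment rejected"],
    "label": [1, 0],
})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=64)

train_set = customer_data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=train_set,
)
trainer.train()  # adapts the base model to the customer's data
```

The key point of the pattern is the division of responsibility: everything baked into the base model is the provider's doing, while everything introduced at the fine-tuning step comes from the customer.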

But customers have often asked about the data that IBM uses, and whether they can be sure of its integrity. For example, customers in the healthcare industry need to be sure that the data used to train models is not only accurate, but also stripped of personal details that could be linked back to individuals, and collected with consent from the original data source.
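
As a rough illustration of what "stripped of personal data" can mean in practice, the sketch below redacts a few obvious identifiers from free-text records before they would be used for training. Real healthcare de-identification (e.g. under HIPAA Safe Harbor) is far stricter; the patterns and placeholder names here are assumptions for illustration only.

```python
# Sketch: redact obvious personal identifiers from text before training.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b(?:\+?\d{1,3}[ -]?)?(?:\(?\d{3}\)?[ -]?)\d{3}[ -]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(record: str) -> str:
    """Replace matched identifiers with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        record = pattern.sub(f"[{label}]", record)
    return record

print(redact("Contact Jane at jane.doe@example.com or 555-123-4567."))
# -> "Contact Jane at [EMAIL] or [PHONE]."
```

Note that pattern matching alone misses names and indirect identifiers ("Jane" survives above), which is exactly why customers want the data provider, rather than regexes, to certify the dataset.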

Thus, IBM has announced that it will be providing guarantees on the data it uses to train its models, essentially certifying what its models generate. At the same time, customers will receive protections covering this data, with IBM taking responsibility for the models it trains up to the point where individual customers' own data is introduced.

At the same time, Microsoft has also announced that it is offering legal protection for users of its Copilot AI solutions. Simply put, should customers of Copilot be sued over content it generates, Microsoft will not only step in to defend them, but will also make payouts as needed (this covers the Copilot tools in Microsoft 365, Bing, and GitHub).

What does this mean for the future of AI solutions?

The fact that AI models require massive amounts of data to train on means that few entities around the world have the capability to create complex models (such as Microsoft, Google, and Facebook). As these companies own the data that they store, they are unlikely to face legal challenges over training models, but smaller companies (such as OpenAI) that rely on publicly available data are likely to feel the heat of the courtroom in the coming years.

Going forward, this means that providers of generative AI will need to be extremely cautious about where their data comes from, and this could hinder AI development. Of course, there is nothing stopping a company from developing an internal AI using copyrighted data, but the results of such an AI could never be shared outside the organisation, nor contribute to commercial products.

However, if only a few companies are able to create complex AI because of data ownership, that could be seen as an unfair monopoly on data, triggering all kinds of legislative changes. For example, the law may eventually require that collected user data cannot be used to train AI models at all.

Where AI goes from here is not exactly clear; while the technology is unlikely to be banned outright, it will certainly have to clear a few hurdles.