Terms of use of the new Chinese platform divide opinion.
After shaking up the Silicon Valley technology market in recent weeks, DeepSeek, the Chinese startup whose chatbot competes with OpenAI's ChatGPT, among others, has become the target of controversy over privacy and data storage. Considering global digital-law frameworks and Brazilian legislation, the question remains: does the platform really comply with the requirements of Brazil's General Data Protection Law (LGPD)?
According to Alexander Coelho, a partner at Godke Advogados and a specialist in Digital Law and Data Protection, it is too early to say precisely whether DeepSeek's terms of use are fully compliant with the LGPD and other international privacy standards, such as the European GDPR. "However, considering Chinese companies' history of limited transparency when it comes to data collection and processing, there are reasons for concern," warns the lawyer.
In the view of Luiz Fernando Plastino, a lawyer at Barcellos Tucunduva Advogados (BTLAW) and a specialist in IT law, the terms of use are very similar to those of other platforms, such as ChatGPT. "You need to send the data so that the machine can receive the questions and compute what it will read in order to answer," he explains.
One difference Plastino points out is that, in principle, the user has no option to refuse to have this data sent for training. "On the other hand, the model is open source and you can run your own separate instance on your computer or on your server. In that case, running your own separate instance is even more secure than ChatGPT, because it won't send data out," says Plastino, stressing that DeepSeek can be run on a local instance, which is more secure for companies.
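As a rough illustration of what "running your own separate instance" can mean in practice, the sketch below loads one of DeepSeek's openly released distilled checkpoints with the Hugging Face transformers library and generates text entirely on local hardware. The specific model name and prompt are assumptions chosen for illustration, not details from the article, and a production deployment would look different.

```python
# Minimal sketch: running an open-weights DeepSeek distilled checkpoint
# locally, so prompts are processed on your own machine rather than sent
# to a third-party service. Model ID and prompt are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed checkpoint

# Weights are downloaded once and cached; afterwards inference runs offline.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

prompt = "Summarize the LGPD principle of data minimization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")

# Generation happens in-process: no API call, no data leaves the machine.
outputs = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```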
In Coelho's view, the warning for DeepSeek is similar to what happened with TikTok, which was the target of investigations and bans in several countries, including the United States, over suspicions that its data could be accessed by the Chinese government.
"China has laws that oblige local companies to provide data to the government if requested. If DeepSeek follows the same model of massive data collection, its popularity could become a strategic tool for Beijing, especially in AI training based on global human interactions," warns the expert.
Finally, the experts consider whether the ANPD (National Data Protection Authority) will need to take a stricter stance toward this and other AI platforms. "Brazil will need to be more proactive in regulating and supervising new technologies, especially when they involve artificial intelligence and the processing of large volumes of personal data. The ANPD should monitor DeepSeek to assess whether the platform follows the principles of the LGPD, especially with regard to user consent, data minimization and restrictions on international sharing," argues Coelho.
Plastino, on the other hand, believes that the various types of generative AI operate on essentially the same premises with regard to privacy, and that all of them come with caveats.
"I don't think it's an additional concern simply because it comes from the government of China, a country that Brazil has always maintained friendly relations with, as well as with other countries that produce artificial intelligence. It might be an interesting idea, in fact, for the ANPD itself to evaluate this model more closely, precisely because the code is open and, with this, it is possible to learn and have some insights to be applicable to other codes," he concludes.