“Hardly any other technology promises such significant productivity gains and, at the same time, such clear cost transparency as AI – an opportunity for companies and the economy alike. And yet, it is often not seized: Instead of acting boldly, many decision-makers in business and politics are guided by fears and cognitive biases. They cite data protection or find other excuses. But those who hesitate risk a great deal.
Artificial intelligence is considered a key technology of our time. It promises more efficient processes, better decisions, and entirely new business models. Yet in many places there is no sign of change. Instead of curiosity there is nervousness, and instead of innovation, paralysis.
I recently attended an AI event that was supposed to inspire enthusiasm for the future. But instead of a constructive discussion of opportunities and framework conditions, horror scenarios were, as so often, lined up one after another. On the panel sat a self-proclaimed AI expert who has called himself that since the release of ChatGPT. His central message: anyone who uses ChatGPT risks sensitive data ending up on US servers and being reused without oversight. The effect in the room was palpable. Many felt vindicated in their decision not to use AI.
There are numerous other examples that demonstrate a similar pattern of behavior: A company foregoes AI-supported maintenance software because it is unsure whether machine data could be considered personal. A municipality abandons a pilot project for traffic flow optimization, even though the movement data could be completely anonymized. What all these cases have in common: It's not the actual misuse of data that prevents innovation, but the fear of it. And this fear is often irrational. In many cases, it follows psychological patterns well described in behavioral economics.
How psychological mechanisms prevent investments in AI
A key mechanism is loss aversion, described by Daniel Kahneman, Nobel laureate in economics and pioneer of behavioral economics. According to this theory, many people weight potential losses more heavily than equally large gains. The hypothetical reputational damage caused by a data protection incident often appears more threatening than the real efficiency gains from an AI application. The result is disastrous avoidance behavior. In the IT industry, it is often said: "No one ever got fired for buying SAP." The idea behind this is that a botched open-source project, for example, endangers a personal career more than an overly expensive SAP implementation. Today, much the same holds: no one has ever been fired for blocking an AI project on data protection grounds – regardless of the opportunities lost.
Furthermore, anyone who stops a project for data protection reasons is morally on the safe side. They do not have to assume responsibility for unknown risks – and can simultaneously invoke a high ethical principle. This is convenient, provides personal protection, and appears responsible – even if it hinders collective progress.
The current AI Monitor from TU Darmstadt, a representative survey of more than 2,000 participants conducted in cooperation with the opinion research institute YouGov, also identifies data protection as the risk respondents associate most strongly with the use of generative AI – ahead of hallucinations, the loss of human abilities, ignorance of the origin of information, and AI's energy consumption. What is striking: women express concerns more often than men, tech-savvy individuals rate the risks significantly lower than those with less information, and older people are more skeptical than younger people. Let's examine whether and to what extent these concerns are truly justified.
Price and Product Differentiation: What Everyone Needs to Know
A central point in many data protection debates is the question of whether inputs into ChatGPT or similar language models are automatically saved, read, or used to train the models. The answer depends largely on the version used. Like many software manufacturers, most well-known AI companies also engage in price and product differentiation: similar versions of their products are offered at different prices with the goal of maximizing revenue.
OpenAI, for example, offers its services in several versions – from the free basic version to "ChatGPT Enterprise" with comprehensive data protection and guaranteed data isolation.
While the free version uses user input by default to improve the models, the Enterprise version enables encrypted data transmission, no use of input for training purposes, and contractually guaranteed processing according to company-specific data protection guidelines. Google takes a similar approach with its Gemini models.
Data Protection: Why AI is often subject to double standards
However, the prejudice that AI is hostile to data protection persists, as the event described above demonstrates: generative AI is supposedly a data protection minefield. In reality, language models such as ChatGPT can easily be operated within companies in such a way that no sensitive information leaks out unprotected. On the one hand, this is possible entirely internally – with a company's own infrastructure, in isolated networks, and with clearly regulated access rights. In this scenario, all inputs and outputs remain within the company's own IT environment, and data protection and confidentiality are entirely in the company's own hands.
On the other hand, cloud operations can also be GDPR-compliant if they rely on professional offerings that guarantee encrypted transmission and do not use inputs for training purposes.
From a data protection perspective, there is no fundamental difference between using language models and using, for example, Outlook or other common email applications. Both can be operated in a GDPR-compliant manner in cloud environments – even with infrastructure from American providers such as Microsoft or Google. This very point is overlooked in many debates or is simply not known. And in practice, prompts generally contain no more sensitive information than regular business emails.
Europe has cultivated a reflex of caution
Of course, the high penalties for violations provided for in the GDPR and the EU AI Act also fuel fears. This uncertainty often slows the pace of innovation in Europe. Caution has become a reflex in Germany: as soon as a new technology is announced, an extensive debate about its risks, regulation, and potential negative developments begins. This insistence on thoroughness is part of our political culture – and not without its benefits. But all too often it devolves into a paralyzing wait-and-see attitude that sets us back in strategic areas of the future.
The internet serves as a cautionary tale: in the 1990s, companies in the US and Asia built platforms, markets, and business models, while Europe spent a long time discussing potential risks such as controllability, data protection, and technological maturity. Even Tim Berners-Lee, the inventor of the World Wide Web and its markup language HTML, was critical of his own creation. This did not alter the internet's global triumph. History shows that, because of network effects, it is often not the best technologies that prevail, but those that gain users first. The world's most valuable companies are predominantly from the US – those that courageously embraced the new internet technologies back then.
Today, we are experiencing déjà vu in artificial intelligence, especially in the important area of generative AI. Consider the so-called Chatbot Arena, a platform that compares the quality of different language models. The top-ten list is shown in the figure.
OpenAI leads the ranking with its recently released GPT-5 language model, replacing Google's Gemini 2.5 Pro, which held first place for a long time. Other OpenAI models are also among the top ten, as are language models from Anthropic, founded by former OpenAI employees, and from Elon Musk's xAI. There are also three Chinese models. As with the internet, Europe plays no role on the provider side.
Why even a clearly positive return doesn't motivate action
This is precisely why it is crucial to boldly move forward, at least on the user side. Generative AI has the potential to raise productivity and innovation to new levels – a once-in-a-century opportunity for companies of all sizes, from corporations to medium-sized businesses. Numerous studies already demonstrate significant productivity improvements:
A joint study by Stanford University, the Massachusetts Institute of Technology (MIT), and Microsoft in the area of customer service shows that the use of generative AI increases the number of customer issues resolved per hour by 15 percent. The productivity gain was particularly significant among less well-trained employees—in this group, the increase was as much as 36 percent.
Another MIT study found that employees were able to complete their tasks approximately 35 percent faster with the support of ChatGPT. In addition—and this is often forgotten—the quality of the text and content, as well as the originality of the results, were evaluated by experts. The result was that texts created with the help of ChatGPT performed better than those without AI support. A controlled study by the Boston Consulting Group produced similar results.
In the AI podcast of the Frankfurter Allgemeine Zeitung, Hartmut Jenner, CEO of Kärcher, reports on daily time savings of about an hour in his work.
In materials research, an MIT study shows that researchers who use AI discover new materials 44 percent faster, file patents 39 percent earlier, and develop product prototypes 17 percent faster.
Several studies also confirm comparable productivity gains in software development. According to Sundar Pichai, CEO of Alphabet and Google, AI now generates more than 30 percent of new code at Google.
The opportunities are so obvious that a detailed business plan or investment model is hardly needed to evaluate the business case. In addition, the costs of generative AI are comparatively transparent and easy to calculate.
Both the costs for licenses (often in the range of €20 per user per month) and for input and output tokens for API-based use are known.
This makes it all the more incomprehensible that so many companies hesitate. Sometimes it seems as if they are practically looking for excuses not to act – be it data protection, a position on the Gartner hype cycle that supposedly proves generative AI is already on the decline, the lack of a unique selling proposition, or other concerns.
Reality will likely look different. Jensen Huang, CEO of the technology company Nvidia, sums it up this way: "It's not artificial intelligence that will replace your job—it's the person who uses it. Those who ignore AI will soon be overtaken in the competition by those who use it."
Caution is a virtue, but it shouldn't become an excuse for inaction. Those who fail to set standards in a key technology, whether as developers or as users, become dependent on external providers – and give up the opportunity to help shape future standards. In many cases, those providers will be American or Chinese, as the Chatbot Arena shows. Since Donald Trump's return to the US presidency, this is even more true for Europe. Those who lag behind technologically risk not only economic dependence but also geopolitical vulnerability.
Europe faces a strategic choice: Either we learn to seize opportunities before they are completely lost—or we become permanent consumers of technological innovations from other regions of the world. The time to think about this is almost over.
The Real Danger: Standing Still in the Name of Data Protection
Of course, data protection is a valuable asset – especially in the age of data-driven technologies. The protection of personal information must never be compromised. But precisely for this reason, we must design it in a way that enables innovation, not hinders it. All too often, however, the opposite happens: Out of concern about potential risks, projects are halted long before the opportunities and risks have even been weighed against each other. The reasons are often psychological biases or a lack of knowledge.
The result is a noticeable weakening of Europe as a business location. Startups are moving away, and investments are flowing to places where data protection is understood as a principle that can be shaped, not as a brake on innovation. As a result, Europe is squandering not only startups and capital, but also the opportunity to translate excellent research results and technological potential into market-relevant innovations. The answer lies not in less, but in smarter data protection – as an integral part of the design, not as a hurdle at the end of the process. "Privacy by Design" instead of a blanket blockade.
With common sense, reliable providers, and clear rules, generative AI can already be used today in a data protection-compliant and cost-effective manner. Continuing education is a crucial lever in this process—not only to expand skills, but also to reduce fears.
Furthermore, data protection is often only the most visible pretext. As discussed above, other motives frequently lie behind the decision not to invest and to wait and see instead. All these arguments have one thing in common: they justify inaction without naming the price of hesitation.
The crucial question is not whether we can, but whether we want to. This technology represents a rare opportunity. If we block it out of fear of hypothetical risks or out of a reluctance to take responsibility, we will squander real opportunities for prosperity, competitiveness, and social progress. Those who focus only on potential risks miss out on the gains for companies and economies. The history of the internet shows that those who deliberate too long eventually end up negotiating only the rules of the game – while others have long since won the game.
Peter Buxmann is a university professor of business informatics at the Technical University of Darmstadt. He also works in the business world as a supervisory board member and company founder, and co-hosts the Frankfurter Allgemeine Zeitung podcast on artificial intelligence.” [1]
What is the enterprise analog in Google's AI offerings for companies that need to protect their data?
The primary Google AI platform for businesses that need to protect data is Vertex AI. It runs on the secure infrastructure of Google Cloud. Google Workspace with Gemini is another option, providing integrated AI within productivity tools. Both platforms ensure customer data is not used to train public models.
Vertex AI platform
Vertex AI is a platform for building, deploying, and managing enterprise AI applications. It offers a secure environment that allows companies to control their data and models.
Key data protection features include (several of these are illustrated in the sketch after this list):
Data ownership and control: Google states that customer data will not be used to train or improve its own large language models (LLMs) without explicit permission. Your data, prompts, and generated outputs remain your intellectual property.
Data isolation: Customers can run AI workloads and models in isolated, private networks using Virtual Private Cloud (VPC) and Private Services Connect (PSC). This prevents unauthorized external access to sensitive data and models.
Confidential computing: Confidential AI on Vertex AI encrypts data while it is being processed in memory. This ensures the data is inaccessible even to Google's internal teams.
Data anonymization and masking: The Sensitive Data Protection API can automatically discover, classify, and de-identify sensitive information before it is used for model training or analysis.
Customer-managed encryption keys (CMEK): You can encrypt data and AI model artifacts using your own encryption keys.
Role-based access control: Vertex AI integrates with Google Cloud's Identity and Access Management (IAM) to enforce fine-grained permissions.
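To make the points about data anonymization and customer-managed keys more concrete, here is a minimal Python sketch, assuming the google-cloud-dlp and google-cloud-aiplatform client libraries; the project ID, region, key ring, and model name are illustrative placeholders, not values taken from the text above:

```python
# Sketch: de-identify a prompt with Sensitive Data Protection, then send it to a
# Gemini model on Vertex AI in a project initialized with a customer-managed key.
# Assumes: pip install google-cloud-dlp google-cloud-aiplatform
# Project, location, KMS key, and model name are placeholders.

from google.cloud import dlp_v2
import vertexai
from vertexai.generative_models import GenerativeModel

PROJECT = "my-company-project"          # placeholder project ID
LOCATION = "europe-west4"               # EU region keeps data residency in Europe
CMEK = (                                # customer-managed encryption key (CMEK)
    f"projects/{PROJECT}/locations/{LOCATION}/keyRings/ai-ring/cryptoKeys/ai-key"
)

def mask_sensitive(text: str) -> str:
    """Replace detected names and email addresses before the prompt leaves the app."""
    dlp = dlp_v2.DlpServiceClient()
    response = dlp.deidentify_content(
        request={
            "parent": f"projects/{PROJECT}/locations/global",
            "inspect_config": {
                "info_types": [{"name": "PERSON_NAME"}, {"name": "EMAIL_ADDRESS"}]
            },
            "deidentify_config": {
                "info_type_transformations": {
                    "transformations": [
                        {"primitive_transformation": {"replace_with_info_type_config": {}}}
                    ]
                }
            },
            "item": {"value": text},
        }
    )
    return response.item.value

# Initialize Vertex AI so that newly created resources use the customer-managed key.
vertexai.init(project=PROJECT, location=LOCATION, encryption_spec_key_name=CMEK)

model = GenerativeModel("gemini-1.5-pro")  # model name is an assumption
prompt = mask_sensitive("Summarize the complaint from max.mustermann@example.com.")
print(model.generate_content(prompt).text)
```

This is only a sketch under the stated assumptions: the de-identification step keeps personal data out of the prompt altogether, while the encryption key name is forwarded to the Vertex AI SDK so that newly created resources use the company's own key; exact model names and supported regions should be checked against the current Google Cloud documentation.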
Google Workspace with Gemini
Gemini for Google Workspace provides an enterprise-grade AI assistant for companies using Google's productivity tools. All interactions are protected with enterprise-level security by default.
Key data protection features include:
Data not used for training: Prompts and generated content are not used to train general Gemini models and are not reviewed by humans. The data remains within your organization's domain.
Protections from existing policies: Existing Google Workspace security and data loss prevention (DLP) policies are automatically applied to interactions with Gemini.
Compliance support: Gemini for Workspace has achieved certifications such as ISO 27001, 27017, and 27018 as well as SOC 1/2/3, and can help companies meet compliance requirements such as HIPAA.
AI Protection on Google Cloud
This security framework integrates with Google's Security Command Center to protect the entire AI lifecycle. It provides centralized visibility and control over AI assets.
Specific features include:
Model Armor: Proactive defenses against AI-specific threats like prompt injection, jailbreaking, and sensitive data leaks.
AI risk assessment: Discovers and catalogs AI assets, assesses vulnerabilities, and simulates attack scenarios to help security teams address risks proactively.
Threat management: Capabilities to detect, investigate, and respond to threats targeting AI workloads and models.
Does Vertex AI allow a client company to train its own AI on Google's servers and keep the results of the training for the client's exclusive use?
Yes, Vertex AI allows client companies to train custom AI models on Google's infrastructure, and the resulting models and their training data are for the exclusive use of the client. Google's policies state they do not use client data for their own model training without explicit permission, and the platform is designed with enterprise-grade security and compliance features to ensure data privacy and control for businesses.
How it works:
Custom training: You can use Vertex AI's custom training feature to run your own machine learning code and your own datasets to train models on Google's infrastructure (see the sketch after this list).
Data and model exclusivity: When you train a model on Vertex AI, the data you provide remains yours, and you retain full ownership and exclusive use of the trained model.
Control over training: You have significant control over the training process, including choosing your preferred ML framework and specifying the compute resources and containers for your training job.
Enterprise-grade security: Vertex AI is built for enterprise use, providing secure and compliant environments for managing and deploying AI models.
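As a rough illustration of the custom-training workflow described above, here is a hedged Python sketch using the google-cloud-aiplatform SDK; the project ID, bucket, training script, and container image URIs are placeholders and should be replaced with values from your own environment and the current Vertex AI documentation:

```python
# Sketch: launching a custom training job on Vertex AI with the Python SDK.
# Assumes: pip install google-cloud-aiplatform
# Bucket, script, and container image URIs below are illustrative placeholders.

from google.cloud import aiplatform

aiplatform.init(
    project="my-company-project",                          # placeholder project ID
    location="europe-west4",                               # EU region for data residency
    staging_bucket="gs://my-company-training-artifacts",   # placeholder bucket
)

# Your own training code (train.py) and your own dataset stay in your project;
# the resulting model artifact is written to your bucket and registered there.
job = aiplatform.CustomTrainingJob(
    display_name="churn-model-training",
    script_path="train.py",                                # your training script
    container_uri="europe-docker.pkg.dev/vertex-ai/training/pytorch-gpu.2-1:latest",
    requirements=["pandas", "scikit-learn"],
    model_serving_container_image_uri=(
        "europe-docker.pkg.dev/vertex-ai/prediction/pytorch-gpu.2-1:latest"
    ),
)

model = job.run(
    machine_type="n1-standard-8",
    accelerator_type="NVIDIA_TESLA_T4",
    accelerator_count=1,
    replica_count=1,
)

# The registered model can only be deployed and queried inside this project,
# subject to the IAM roles the company grants.
endpoint = model.deploy(machine_type="n1-standard-4")
```

Under these assumptions, the trained model artifact lands in the company's own staging bucket and model registry, and who can deploy or query it is governed by the IAM roles mentioned earlier.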
Google's data policy:
No data usage for training: Google Cloud states that it will not use your data to train or fine-tune its own AI/ML models without your prior permission.
Cloud Data Processing Addendum (CDPA): Further details on data handling are available in the CDPA, reinforcing Google's commitment to customer data privacy and control.
[1] Peter Buxmann: "Angst, Ausreden, Abwarten – wie wir die Jahrhundertchance Künstliche Intelligenz verspielen" (Fear, excuses, waiting – how we are squandering the once-in-a-century opportunity of artificial intelligence). Frankfurter Allgemeine Zeitung, Frankfurt, 18 August 2025, p. 18.