One in Fourteen Workers Using China-Based AI Apps: Data Risks Unveiled!

A recent study by Harmonic Security reveals that employees are using China-based AI applications at a surprising rate, raising alarms about data security vulnerabilities. Given the potential for data exposure, companies are urged to adopt stringent monitoring, vetting processes, and employee training.

Introduction to AI Application Usage in the Workplace

Artificial Intelligence (AI) applications are revolutionizing workplaces by enhancing productivity and efficiency. However, their integration also poses unique challenges, particularly concerning data security and privacy. A recent study by Harmonic Security found that employees use an average of 254 AI-enabled applications, roughly 7% of which are China-based, a category that raises additional concerns because submitted data may fall under foreign jurisdiction. This statistic underscores the urgent need for companies to implement robust cybersecurity measures to safeguard sensitive information from potential exposure [source](https://itbrief.com.au/story/one-in-fourteen-workers-use-china-based-ai-apps-study-shows).

The use of generative AI tools in workplaces is a double-edged sword. While they can significantly boost productivity, they also expose companies to unprecedented risks. The Harmonic Security study found that approximately 6.7% of prompts submitted to AI platforms potentially exposed company data, with legal and finance data, customer data, and employee data being the most frequently exposed types. This highlights a pressing need for robust data policies to prevent sensitive information from leaking through AI applications [source](https://itbrief.com.au/story/one-in-fourteen-workers-use-china-based-ai-apps-study-shows).

ChatGPT has emerged as the platform most frequently used to share sensitive data, raising significant concerns about data breaches and the integrity of confidential information. To mitigate these risks, the study advises enterprises to adopt stringent data monitoring protocols, restrict the use of personal email accounts for work-related tasks, and provide comprehensive employee training to cultivate a security-aware culture. Harmonic Security advocates these measures as essential steps in safeguarding company data from inadvertent exposure [source](https://itbrief.com.au/story/one-in-fourteen-workers-use-china-based-ai-apps-study-shows).

The inclination to use personal accounts to access AI applications reflects a tendency to prioritize convenience and productivity over security. Personal accounts bypass established company security measures, making it difficult to monitor and control the flow of information. The problem is exacerbated by how easily employees can inadvertently share sensitive data with AI platforms, requiring companies to enforce strict policies and develop solutions that address this behavior [source](https://itbrief.com.au/story/one-in-fourteen-workers-use-china-based-ai-apps-study-shows).

To tackle the security challenges posed by AI applications, Harmonic Security recommends a multi-faceted approach: continuously monitoring AI tool usage, thoroughly vetting AI applications, and providing sanctioned alternatives. Implementing context-aware policies and restricting the use of personal accounts, coupled with targeted training programs, can significantly reduce the risk of data breaches. These strategies aim to cultivate a workplace culture where data security is prioritized, protecting both employee and company interests [source](https://itbrief.com.au/story/one-in-fourteen-workers-use-china-based-ai-apps-study-shows).
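
To make the recommended context-aware policies concrete, here is a minimal sketch of how such a rule could be evaluated in code. The app inventory, keyword list, and outcomes are illustrative assumptions, not details from the Harmonic Security study; a real deployment would rely on a maintained application catalogue and a proper data classifier.

```python
from dataclasses import dataclass

# Illustrative app inventory; a real deployment would pull vetting status
# from a maintained catalogue rather than a hard-coded dict.
APP_STATUS = {
    "chat.openai.com": "sanctioned",
    "deepseek.com": "blocked",
    "internal-llm.example.com": "sanctioned",  # hypothetical in-house tool
}

# Crude stand-in for a real data classifier.
SENSITIVE_KEYWORDS = ("invoice", "salary", "source code", "customer list")

@dataclass
class Decision:
    action: str  # "allow", "warn", or "block"
    reason: str

def evaluate_prompt(app_domain: str, prompt: str, corporate_account: bool) -> Decision:
    """Context-aware check: the same prompt may be fine on a sanctioned app
    with a corporate account but blocked on an unvetted one."""
    status = APP_STATUS.get(app_domain, "unvetted")
    if status == "blocked":
        return Decision("block", f"{app_domain} is blocklisted")
    sensitive = any(k in prompt.lower() for k in SENSITIVE_KEYWORDS)
    if sensitive and not corporate_account:
        return Decision("block", "sensitive data via a personal account")
    if sensitive and status == "unvetted":
        return Decision("warn", "sensitive data sent to an unvetted app")
    return Decision("allow", "within policy")

print(evaluate_prompt("deepseek.com", "Summarise this customer list", True))
# Decision(action='block', reason='deepseek.com is blocklisted')
```

The point of the structure is that the decision depends on the combination of app status, account type, and data sensitivity, rather than a blanket allow-or-block rule.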

Key Findings from the Harmonic Security Study

The Harmonic Security study offers crucial insights into the incorporation of AI in workplace environments. A notable finding is that employees engage with an average of 254 AI-enabled applications, of which approximately 7% are hosted on China-based platforms such as DeepSeek, Manus, and Ernie Bot. This draws attention to the growing integration of foreign technological solutions into everyday business operations and underscores the need for vigilant data protection measures.

A concerning aspect of the study is the potential for data exposure through AI applications. It found that 6.7% of the prompts input to these platforms risked leaking company-sensitive information, nearly half of which came from personal email accounts. ChatGPT was notably the most used platform for such submissions, raising alarms about the reliance on unsanctioned AI tools for handling sensitive data.

The types of data most frequently exposed in these interactions reflect critical business areas: legal and finance data, customer information, employee data, and sensitive code represented the major categories. This exposure poses not only immediate risks but also long-term reputational and financial consequences, demanding prompt attention to mitigate potential breaches.

Addressing these risks requires a multi-faceted approach. Harmonic Security advises organizations to implement continuous monitoring of AI app usage, develop robust vetting processes for AI technologies, and establish context-aware policies that govern data interaction. Enhancing employee training on AI-related threats and restricting the use of personal email accounts for business communications are further critical steps towards reinforcing data security.

The scale of the study, encompassing 176,460 prompts from 8,000 users, underscores the breadth of AI adoption across corporations. This extensive dataset not only illustrates current AI usage patterns but also provides a solid foundation for shaping future data protection strategies and building resilient AI usage frameworks.

Risks of Data Exposure Through AI Platforms

The rising integration of AI platforms in workplaces has unveiled substantial risks of data exposure, with generative AI platforms a focal point [1](https://itbrief.com.au/story/one-in-fourteen-workers-use-china-based-ai-apps-study-shows). The core of these risks lies in the potential leakage of sensitive company information: the Harmonic Security study found that around 6.7% of prompts submitted to generative AI tools might compromise business data. Workers using personal email accounts contribute significantly to these lapses, since such activity often escapes the scrutiny and control of corporate IT departments [1](https://itbrief.com.au/story/one-in-fourteen-workers-use-china-based-ai-apps-study-shows). The implications are stark, considering that the most frequently exposed categories include legal, financial, and customer information [1](https://itbrief.com.au/story/one-in-fourteen-workers-use-china-based-ai-apps-study-shows).

China-based AI applications, such as DeepSeek, Manus, and Baidu Chat, attract particular scrutiny due to concerns about data sovereignty and potential foreign access [1](https://itbrief.com.au/story/one-in-fourteen-workers-use-china-based-ai-apps-study-shows). In a geopolitical landscape where data equates to power, the exposure of corporate secrets and sensitive employee information to these foreign platforms raises questions about national security and regulatory oversight [2](https://www.hoover.org/research/chinas-rise-artificial-intelligence-ingredients-and-economic-implications). To tackle these challenges, companies are encouraged to implement more stringent vetting and monitoring processes for AI tools, ensuring that employees use sanctioned technologies within the bounds of corporate policies [1](https://itbrief.com.au/story/one-in-fourteen-workers-use-china-based-ai-apps-study-shows).

The specter of AI-driven "hallucinations" is another risk factor, but data breaches resulting from AI use pose a more immediate and extensive threat, as reported in various studies [6](https://www.cnbc.com/2024/05/16/the-no-1-risk-companies-see-in-gen-ai-usage-isnt-hallucinations.html). Incidents like the Samsung data leak illustrate the severe consequences of unchecked employee engagement with generative AI platforms [2](https://www.assemblyai.com/blog/how-samsung-accidentally-leaked-company-secrets-using-chatgpt-and-what-you-can-learn/). As more professionals share sensitive documents and data with AI tools, often without proper authorization, the likelihood of unintentional data exposure escalates [11](https://informationmatters.net/employees-casual-ai-use-poses-growing-data-security-risk-study-finds/). Hence, real-time detection of sensitive data and browser-level visibility standards could serve as effective deterrents [2](https://www.harmonic.security/blog-posts/how-many-ai-tools-are-employees-using-more-than-you-think-and-often-with-personal-accounts).
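
As an illustration of what browser-level, real-time detection could involve, the following sketch scans a prompt for sensitive-looking patterns before submission. The patterns are illustrative; production data-loss-prevention tooling combines many more detectors, and nothing here represents Harmonic Security's actual implementation.

```python
import re

# Illustrative patterns only; real DLP engines add ML classifiers,
# document fingerprints, and exact-match dictionaries.
PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the categories of sensitive data detected in a prompt, so a
    browser extension or proxy could warn or block before it is sent."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

hits = scan_prompt("Charge card 4111 1111 1111 1111 and email jo@example.com")
print(hits)  # ['email_address', 'credit_card']
```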

The Issue of Personal Email Usage with AI Tools

The issue of personal email usage with AI tools is becoming a significant concern, as it potentially exposes sensitive corporate data to unauthorized access and misuse. With increasing dependence on AI-enabled applications in the workplace, it is alarming that nearly half of the sensitive data submissions to generative AI platforms come from personal email accounts. This practice undermines enterprise security protocols and poses a challenge for IT departments that struggle to monitor and manage data exfiltration risks effectively. According to the Harmonic Security study, 6.7% of prompts submitted to AI platforms were at risk of exposing company data. This highlights the need for stringent control measures such as restricting personal account usage and implementing robust monitoring and vetting processes for AI tools [1].
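
One simple enforcement primitive, sketched below, is classifying the account used to reach an AI tool by its email domain, so that personal-account sign-ins can be flagged or blocked. The domain lists and event format are illustrative assumptions.

```python
# Hypothetical corporate domain and a small set of well-known personal providers.
CORPORATE_DOMAIN = "example.com"
PERSONAL_DOMAINS = {"gmail.com", "outlook.com", "yahoo.com", "qq.com"}

def classify_account(email: str) -> str:
    """Label an account as corporate, personal, or unknown by its domain."""
    domain = email.rsplit("@", 1)[-1].lower()
    if domain == CORPORATE_DOMAIN:
        return "corporate"
    if domain in PERSONAL_DOMAINS:
        return "personal"
    return "unknown"

# Triage a batch of sign-in events exported from an identity or proxy log.
for email in ["alice@example.com", "bob@gmail.com", "carol@partner.org"]:
    print(email, "->", classify_account(email))
```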
The use of personal email accounts to access AI tools can circumvent established security measures, making it difficult to ensure compliance with data protection policies. Such practices increase an organization's vulnerability to data breaches, especially when sensitive information like client data, financial records, and proprietary code is shared with AI platforms. As highlighted in several incidents, including those involving major corporations like Samsung and Microsoft, the risks of unauthorized data exposure via AI tools are substantial [2][3]. To mitigate these risks, Harmonic Security emphasizes the importance of using sanctioned AI alternatives and implementing continuous monitoring to detect anomalies in data access and sharing activities [1].

The trend of using personal email accounts with AI tools also reflects a broader organizational challenge in balancing productivity and security. Employees often find personal accounts easier and faster to use, which can lead to security complacency. Michael Marriott, an expert in cybersecurity, asserts that focusing on real-time enforcement and visibility of data transactions at the browser level could significantly reduce the misuse of AI tools. However, this requires aligning IT policies with everyday employee behaviors in order to cultivate a culture of security awareness and responsibility [2]. Employee training programs targeting AI usage and its risks can further enhance an organization's overall security posture [1].

Recommended Strategies for Mitigating AI Risks

To tackle the challenges posed by unauthorized and potentially risky AI applications, enterprises should enforce continuous monitoring. Using advanced analytics and security platforms, companies can identify anomalous usage patterns and gain insight into AI use that may compromise data integrity. Harmonic Security advocates this approach, emphasizing that continuous monitoring allows real-time detection of unauthorized access and data leaks through platforms such as ChatGPT and China-based apps like DeepSeek and Baidu Chat. Such oversight reduces the risk of exposing sensitive legal, financial, and personal employee data [1](https://itbrief.com.au/story/one-in-fourteen-workers-use-china-based-ai-apps-study-shows).
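
To give a flavour of such usage analytics, the sketch below flags users whose latest day of AI-app activity deviates sharply from their own baseline. The z-score approach, threshold, and data shape are illustrative assumptions, not the study's methodology.

```python
from statistics import mean, pstdev

def flag_anomalous_users(daily_counts: dict[str, list[int]],
                         z_threshold: float = 3.0) -> list[str]:
    """Flag users whose most recent daily prompt count is far above
    their own historical baseline."""
    flagged = []
    for user, counts in daily_counts.items():
        history, latest = counts[:-1], counts[-1]
        if len(history) < 2:
            continue  # not enough baseline to judge
        mu, sigma = mean(history), pstdev(history)
        if sigma == 0:
            if latest > mu:
                flagged.append(user)
        elif (latest - mu) / sigma > z_threshold:
            flagged.append(user)
    return flagged

usage = {"alice": [5, 6, 4, 5, 38], "bob": [10, 12, 11, 9, 10]}
print(flag_anomalous_users(usage))  # ['alice']
```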
Another critical strategy involves implementing robust vetting processes for AI tools. Before deploying any AI application, whether to enhance productivity or streamline workflows, organizations must evaluate it for security vulnerabilities and compliance with corporate data protection policies. By standardizing a vetting protocol, companies can ensure that only sanctioned AI applications are used within their infrastructure, minimizing the risk of data exposure and unauthorized access. This proactive stance guards against inadvertent data sharing and aligns with expert recommendations for mitigating AI risks [1](https://itbrief.com.au/story/one-in-fourteen-workers-use-china-based-ai-apps-study-shows).
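
A vetting protocol can itself be encoded as data so that approval decisions are repeatable and auditable. The criteria below are a hypothetical starting point, not a published standard.

```python
# Illustrative vetting criteria for an AI tool.
VETTING_CRITERIA = [
    "data_residency_documented",     # where prompts are stored, and for how long
    "no_training_on_customer_data",  # vendor contractually excludes tenant data
    "sso_supported",                 # enforceable through corporate identity
    "audit_logging_available",       # usage can be monitored centrally
    "passed_security_review",        # internal security team sign-off
]

def vet_tool(name: str, answers: dict[str, bool]) -> tuple[bool, list[str]]:
    """Approve a tool only if every criterion holds; otherwise report
    exactly which checks failed."""
    failures = [c for c in VETTING_CRITERIA if not answers.get(c, False)]
    return (not failures, failures)

approved, gaps = vet_tool("ExampleChat", {
    "data_residency_documented": True,
    "no_training_on_customer_data": True,
    "sso_supported": False,
    "audit_logging_available": True,
})
print(approved, gaps)  # False ['sso_supported', 'passed_security_review']
```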
Fostering an environment where sanctioned AI alternatives are prioritized is also advisable. Companies should invest in developing or adopting AI technologies tailored to their operational needs while ensuring these alternatives comply with data privacy standards. Alastair Paterson, CEO of Harmonic Security, suggests prioritizing context-aware policies that regulate how AI tools are accessed and used. These policies should be dynamic, adapting to technological shifts and emerging threats, to effectively mitigate the risks of AI misuse [1](https://itbrief.com.au/story/one-in-fourteen-workers-use-china-based-ai-apps-study-shows).

Restricting the use of personal accounts for accessing AI tools is another essential measure. Personal accounts often bypass corporate IT protections, making it difficult to monitor data transactions. By enforcing the use of company-provided accounts and credentials, organizations can maintain tighter control over how their data is accessed and shared. Michael Marriott underscores the importance of shaping employee behavior at the point of use, suggesting enhanced browser-level visibility and real-time data detection to prevent unauthorized sharing [2](https://www.harmonic.security/blog-posts/how-many-ai-tools-are-employees-using-more-than-you-think-and-often-with-personal-accounts).

Finally, targeted employee training on safe AI practices is vital. Educating employees about the risks of sharing sensitive information and the potential implications of data breaches fosters a culture of security awareness. Training should cover recognizing data privacy threats, handling data appropriately, and understanding the consequences of using unsanctioned AI tools. By investing in comprehensive training programs and continuous learning opportunities, organizations can significantly reduce the likelihood of accidental or malicious data exposure [1](https://itbrief.com.au/story/one-in-fourteen-workers-use-china-based-ai-apps-study-shows).

Case Studies: AI-Related Security Incidents

In recent years, a number of significant security incidents involving AI have illustrated the risks that arise as the technology becomes more integrated into corporate environments. One such incident involved Samsung in 2023, when employees inadvertently leaked sensitive company data through generative AI tools like ChatGPT. The leaks included source code, internal meeting notes, and hardware details, prompting the company to ban these tools to protect its intellectual property. The incident exposed the vulnerabilities that arise when sensitive corporate data is handled outside traditional secure channels, highlighting the importance of strict data usage policies and employee education about the risks of AI applications.

Another noteworthy case was the exposure of 38 terabytes of sensitive data by Microsoft AI researchers, due to a misconfiguration in an Azure Blob Storage account. This incident underscored the significance of meticulous data management and security protocols in AI development environments. The breach raised alarms about the potential impact on customer trust and the importance of robust safeguards against unauthorized access to sensitive information. Even a small configuration oversight can lead to a significant breach, underscoring the need for continual auditing and monitoring of AI development environments.
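
As an illustration of the kind of auditing this implies, the sketch below lists storage containers that permit anonymous access, assuming the azure-storage-blob Python package and a valid connection string. It covers only one class of misconfiguration: the Microsoft incident reportedly stemmed from an over-permissive shared access signature, so a thorough audit would also review SAS tokens and access policies.

```python
from azure.storage.blob import BlobServiceClient

def list_public_containers(connection_string: str) -> list[str]:
    """Return the names of containers whose public access level is set,
    i.e. containers readable without authentication."""
    service = BlobServiceClient.from_connection_string(connection_string)
    public = []
    for container in service.list_containers():
        # public_access is None for private containers, or "blob"/"container"
        # when anonymous access is enabled.
        if container.public_access:
            public.append(container.name)
    return public
```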
Apart from specific incidents, growing concerns over AI "hallucinations", where systems generate incorrect or misleading information, compound the challenges. A report from CNBC highlights that 45% of organizations have experienced unintended data exposure during AI deployment, raising alarm over how companies handle sensitive data. While hallucinations pose their own set of challenges, the risks tied to actual data leaks are considered far more severe. Organizations are urged to prioritize data security in their AI deployment strategies to mitigate these risks.

Moreover, the casual sharing of sensitive data by employees with AI platforms poses a considerable threat. Nearly 40% of professionals have admitted to using AI tools to share work-related information such as documents, financial reports, and source code without authorization, according to a study referenced by Information Matters. This is a stark reminder of the need for enforced policies and robust training programs to ensure employees understand data security risks and the importance of complying with company protocols.

Expert Opinions on Managing AI and Data Security

The rapid integration of AI technology in the workplace has raised significant concerns among experts regarding data security, particularly around AI applications based in China. Alastair Paterson, CEO of Harmonic Security, emphasizes the critical risks of employees using personal accounts to access unsanctioned AI tools. This practice not only circumvents IT oversight and established security protocols but also increases the vulnerability of sensitive data to unauthorized access. Continuous monitoring and a robust vetting process for AI applications are the key mitigations Paterson recommends.

Furthermore, Michael Marriott points out that the ease of accessing AI apps often leads employees to prioritize convenience over security, driven by the productivity benefits the tools offer despite the data security threats they pose. Marriott advocates focusing on real-time detection of sensitive data exchanges and enforcing security protocols at the point of use. By enhancing browser-level visibility, organizations can better monitor how AI tools are used within corporate environments, diminishing the risk of unauthorized data disclosure.

The complexity of managing AI security is further compounded by the geopolitical implications of using foreign-based AI applications. The finding that approximately 7% of these tools are China-based raises alarms about data privacy and control, drawing attention to potential data sovereignty issues. Experts warn that such exposure could have significant economic, social, and political ramifications: economically, breaches can incur financial losses and damage reputations; socially, identity theft and fraud could rise if sensitive information is not safeguarded; and politically, there could be national security implications necessitating tighter governmental regulation and international cooperation on AI ethics and data protection.

Public Reactions to AI and Data Security Risks

Public reactions to AI and data security risks are mixed, reflecting a landscape where innovation collides with concerns over privacy and control. While many recognize the transformative potential of AI technologies, there is underlying anxiety about how these advances may compromise personal and corporate data. The prevalence of China-based AI applications, as highlighted in the Harmonic Security study, raises significant apprehensions about data sovereignty and national security, particularly in Western nations. This concern is not unfounded, as the exposure of sensitive information to foreign entities could have far-reaching implications. Alastair Paterson, CEO of Harmonic Security, underscores the importance of understanding these risks and advocates stringent monitoring and control measures.

The public's perception of AI is also colored by incidents of data mismanagement and security lapses that continue to garner media attention. Incidents like the unauthorized data sharing by Samsung employees and the accidental exposure of sensitive data by Microsoft researchers have exacerbated fears around AI usage. Consequently, consumers and businesses alike are calling for stricter regulatory frameworks to safeguard sensitive data, pushing tech giants such as Microsoft to incorporate new security features like those seen in the Edge browser. This indicates a growing demand for accountability and privacy protection in AI deployment.

Public apprehension is further stoked by the trend of employees using personal accounts and unauthorized AI tools that sidestep corporate security measures. This not only raises alarms about individual privacy but also escalates the risk of business data breaches. As experts note, such practices highlight the need for behavioral change and real-time data handling policies within organizations. Addressing these concerns requires educating the workforce about the potential dangers and establishing robust internal frameworks to deter data mishandling.

In light of these discussions, the public is increasingly aware of the potential for AI 'hallucinations', or algorithmic errors, to lead to flawed decisions. However, much of the discourse focuses on the risks of data exposure over algorithmic inaccuracies. Many are urging comprehensive reforms and international cooperation to address these challenges and strengthen AI ecosystems globally, pushing for a balance between technological advancement and data protection.

Future Implications of AI Usage on Security and Policy

The rapid integration of artificial intelligence (AI) into business environments poses significant challenges for security and policy frameworks. As noted in the [Harmonic Security study](https://itbrief.com.au/story/one-in-fourteen-workers-use-china-based-ai-apps-study-shows), many workers rely on AI applications that can compromise sensitive data, using personal email accounts and unsanctioned AI tools. This points to a pressing need for companies to bolster security measures around AI usage, including context-aware policies, stringent data monitoring, and robust training programs that educate employees about the risks and best practices for using AI responsibly.

Another dimension of AI adoption is its economic ripple effect, as the intersection of technology and data management grows increasingly intricate. While AI boosts operational efficiency, it also increases vulnerability to data breaches, potentially causing financial losses and reputational damage, as explored in the [Kiteworks report](https://www.kiteworks.com/cybersecurity-risk-management/sensitive-data-ai-risks-challenges-solutions/). Businesses must weigh the benefits against the risks, instituting rigorous vetting processes for AI applications to mitigate potential breaches.

Social implications are also at the forefront, with data sovereignty proving particularly complex when China-based AI applications are used, raising red flags around privacy and identity security. The [Hoover Institution analysis](https://www.hoover.org/research/chinas-rise-artificial-intelligence-ingredients-and-economic-implications) highlights the geopolitical ramifications of AI, emphasizing the need for international dialogue and robust policymaking, including coherent international standards and agreements on AI governance.

On the political spectrum, AI usage in sensitive areas such as national security presents potential vulnerabilities. Leaks of confidential data to foreign AI platforms could exacerbate tensions and raise serious national security considerations, as implied by the geopolitical backdrop outlined by the [Hoover Institution](https://www.hoover.org/research/chinas-rise-artificial-intelligence-ingredients-and-economic-implications). Governments may be prompted to enforce stricter regulations and collaborate internationally to ensure data protection and the ethical use of AI. In the long term, policies need to be as dynamic and adaptive as the technology they aim to regulate.

In conclusion, the rise of AI in business environments necessitates a balance between leveraging technology for productivity and safeguarding against its risks. As the landscape evolves, insights from studies like [Harmonic Security's](https://itbrief.com.au/story/one-in-fourteen-workers-use-china-based-ai-apps-study-shows) are vital to informing policy decisions, ensuring that AI tools are used safely and effectively while minimizing threats to both organizational and national security.
