We previously explored what Shadow AI and BYOAI are and how the unauthorized or unmanaged use of AI tools and platforms can pose risks to companies. To illustrate the impact of BYOAI (Bring Your Own AI) and Shadow AI across industries, here are several real-world examples and scenarios from the technology, finance, and healthcare sectors involving some of the world's largest companies. These case studies show how Shadow AI was introduced and how the companies responded.
JPMorgan Chase Restricts AI Use
Citing compliance and privacy concerns, JPMorgan Chase and several other major banks (Bank of America, Citigroup, Deutsche Bank, Wells Fargo, Goldman Sachs) banned or restricted employee use of ChatGPT and similar tools at work. As generative AI usage ramped up, financial institutions grew concerned about data leaks and the compliance risks of unapproved use. To play it safe, many banks moved quickly to safeguard sensitive client information.
Amazon Detects Internal Data in ChatGPT
Amazon discovered that ChatGPT responses closely mirrored its internal proprietary data, leading the company to conclude that employees had been using the large language model and inputting sensitive information. In response, Amazon's management issued a directive banning staff from sharing code or other confidential information with external AI providers.

Samsung Leaks Confidential Code via ChatGPT
At Samsung's semiconductor division, engineers began using ChatGPT to help with coding and meeting notes. Within three weeks, three incidents occurred in which employees accidentally leaked sensitive data, including Samsung's proprietary source code and an internal meeting transcript, by entering it into ChatGPT. Once the data was fed into OpenAI's system, Samsung lost control of it. Following the leaks, Samsung swiftly banned external generative AI tools on its corporate networks and launched an internal investigation. The company also began developing its own private AI tool to reduce its reliance on public models. The episode is now widely cited as a cautionary tale of Shadow AI gone wrong.
Apple Bans ChatGPT and GitHub Copilot for Employees
Citing the risk of leaks, Apple banned employees from using ChatGPT, Microsoft's GitHub Copilot, and similar AI tools at work. Apple's leadership worried that staff might unknowingly share proprietary information (such as software code for unreleased products) with systems run by outside providers. Notably, Apple's restrictions came shortly after the Samsung incident, and a Wall Street Journal report highlighted that these AI services are operated or backed by a competitor (Microsoft, in this case). Apple's move underscored the importance of protecting intellectual property in the age of BYOAI.

Shadow AI in Healthcare: A Risk on the Horizon
Shadow AI hasn't made headlines in healthcare yet, but it's still a serious concern. Some doctors and hospital staff have been quietly testing tools like ChatGPT to assist with writing patient visit notes and referral letters, as well as analyzing clinical data. In doing so, some individuals have unintentionally entered protected health information (PHI) into third-party AI systems, thereby violating patient privacy laws. For example, a physician might input parts of a patient's medical record into ChatGPT to generate a summary letter. This seemingly harmless time-saver can constitute a HIPAA breach because the PHI leaves the hospital's secure environment for OpenAI's servers, which typically fall outside any business associate agreement.
Such an action could lead to government investigations and fines for the healthcare provider. No hospital has publicly reported a significant penalty related to ChatGPT and HIPAA, but medical centers remain cautious. Experts from USC and law firms have published warnings in medical journals highlighting how easily a doctor could breach privacy rules by using generative AI.
Some hospitals are implementing training and technical measures to prevent clinicians from pasting patient data into chatbots. This situation highlights that, in healthcare, the cost of Shadow AI could be legal penalties and loss of patient trust.
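To make those technical measures concrete, here is a minimal sketch of one such guardrail: a simple outbound-text filter that scans a prompt for PHI-like patterns before it is ever sent to an external AI service. This is an illustrative assumption of how a hospital IT team might approach the problem, not a description of any specific product; the pattern names, regexes, and function names below are hypothetical and far from a complete PHI detector.

```python
import re

# Hypothetical PHI-like patterns; a real deployment would use a vetted DLP ruleset.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "date_of_birth": re.compile(r"\b(?:DOB|date of birth)[:\s]*\d{1,2}/\d{1,2}/\d{2,4}\b", re.IGNORECASE),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def contains_phi(text: str) -> list[str]:
    """Return the names of PHI-like patterns found in the text."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(text)]

def safe_to_send(text: str) -> bool:
    """Allow the prompt only if no PHI-like pattern is detected."""
    findings = contains_phi(text)
    if findings:
        print(f"Blocked: possible PHI detected ({', '.join(findings)})")
        return False
    return True

if __name__ == "__main__":
    prompt = "Summarize this visit: DOB: 04/12/1961, MRN: 0048213, patient reports chest pain."
    if safe_to_send(prompt):
        print("Request would be forwarded to the approved AI service.")
```

Even a basic filter like this, placed in front of any sanctioned chatbot integration, gives compliance teams a chance to stop the most obvious leaks before data leaves the hospital's environment.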

Additional Shadow AI Examples: Similar Patterns in Other Sectors
Insurance & Legal: Employees at insurance companies or law firms might use AI tools to draft policies or contracts, unintentionally exposing client data or confidential case details, which raises liability and confidentiality concerns.
Education: Teachers or administrators using AI to manage student data could violate privacy laws like FERPA. Some school districts have banned ChatGPT, fearing employees may leak student information or exam materials.
Manufacturing & Energy: Engineers might feed proprietary data about facilities into AI tools to optimize designs or maintenance schedules, data that could be sensitive for safety or competitive reasons.
Government: Government workers have been advised against using unauthorized AI for official documents because of security classifications and citizen data privacy. For example, some national governments temporarily blocked tools like ChatGPT on work devices while conducting security reviews.
Conclusion
Each of these sectors faces a common challenge: how to capture AI's productivity benefits without compromising data security and compliance. The case studies above, especially those from 2023, show that many top organizations responded promptly to early BYOAI incidents. The trend is evident across industries: unapproved AI use is widespread, and effective governance is now a top priority to keep minor mistakes from escalating into major disasters.
Don’t Let Shadow AI Put Your Business at Risk
TrustedTech can help you secure your organization against unauthorized AI use, protect sensitive data, and ensure compliance before a minor mistake becomes a major disaster.


