AI and Privacy Issues: Data Leaks and Breaches

We recently posted about the AI warning we received from a partner about the use of AI tools and protecting their confidential information. Beyond the specifics of the warning, we quickly saw a much broader context. Using AI tools, if not managed carefully, can result in unauthorized data disclosures, breaches, or leaks. These disclosures may easily violate laws, regulations, industry standards, and contractual obligations. Before exposing your business to unnecessary liabilities, understand how your AI tools and services manage, and ensure, data privacy.

Scope of the AI and Privacy Problem

To gain a better sense of the issue, we decided to look into the data privacy practices of meeting assistants, one of the most commonly used AI tools for small and midsize businesses. Traditional meeting assistant tools transcribe discussions. Newer versions use AI engines to capture action items, summarize discussion points, and analyze the attitudes and sentiments of participants. We reviewed the terms of service, privacy policies, and FAQs for several services.

Here are some excerpts from our findings (company and service names redacted):

AI Terms of Service

Do not use the service if you need to keep protected or confidential information private:

You hereby represent and warrant to [Company] that your User Content … (ii) will not infringe on any third party’s copyright, patent, trademark, trade secret or other proprietary right or rights of publicity, personality or privacy; (iii) will not violate any law, statute, ordinance, or regulation (including without limitation those governing export control, consumer protection, unfair competition, anti-discrimination, false advertising, anti-spam or privacy);

[Company] disclaims liability for your use of their services:

… the user understands and accepts the risks involved with the use of AI or similar technologies and agrees to indemnify and hold [Company] harmless for any claims, damages, or losses resulting from such usage.

Allowing an AI engine to analyze your information, or allowing a service to use your information to train their AI-based services, is a disclosure:

When you post or otherwise share User Content on or through our Services, you understand and agree that your User Content … may be visible to others

AI Privacy Policies

Using AI tools has inherent risks:

By utilizing [Company]’s services, the user understands and accepts the risks involved with the use of AI or similar technologies and agrees to indemnify and hold [Company] harmless for any claims, damages, or losses resulting from such usage.

Some tools have service options, at added costs, to ensure data privacy:

… customers that want their data to be strictly segregated (for example, customers dealing with PHI) can choose the [service] option to exercise complete control over their compute and data infrastructure, ensuring that their data is separated per their compliance requirements.

Some services explicitly tell you that sharing confidential information violates their privacy policy:

You may also post or otherwise share only Content that is nonconfidential and that you have all necessary rights to disclose.

The Risks and Challenges with AI

Given justifiable concerns about data protection and privacy, we have been trained to think about data leaks and breaches in terms of cyber attacks. We also look at "insider threats," which are often human errors such as accidentally sharing files externally or putting confidential information in an unsecured email.

The use of meeting assistants and other AI-powered productivity tools creates a new category of risk. To learn and improve, AI tools need to be trained on data, and the easiest way to get that data is to capture the information users provide. The users get their results; the AI tool trains, learns, and improves.

While this works for the AI tool or service provider, it creates a data breach exposure for users unless the tool has specific policies and services to ensure compliance with data privacy laws and regulations.

Using an unsecured AI meeting assistant creates an incidental, if unintentional, breach. 

Some examples of incidental breaches caused by unsecured AI meeting assistants:

  • Two doctors discuss a patient consult, disclosing personal health information (PHI) to third parties in violation of HIPAA
  • You discuss project details with one of your clients, disclosing confidential intellectual property in violation of your contract
  • Your financial advisor discusses your financial holdings and accounts with you, disclosing personally identifiable financial information in violation of industry regulations and standards

Protect Yourself and Your Business from AI and Privacy Issues

Of the several AI meeting assistant services we reviewed, very few will keep your information private. Those that do typically charge additional fees.

When you get on a video meeting or conference call, ask the host if their meeting assistant is secure. If not, or if they are unsure, ask them to turn it off.

More generally, take a step back and plan your approach to AI.

  • Consider how and when you want to use AI in your business
  • Make sure you and your team understand your contractual and regulatory responsibilities with respect to information privacy
  • Assess the AI tools and services you plan to use:
    • Understand their data privacy commitments
    • Match privacy policies and commitments against your business and legal requirements
    • Opt in to agreements that ensure data privacy, even if it requires paying for the service

With an understanding of your requirements and AI services, AI can add value to your business without introducing significant avoidable risk.

We Can Help

To discuss your technology service needs and plans, click here to schedule a call with a Cloud Advisor or send us an email.

About the Author

Allen Falcon is the co-founder and CEO of Cumulus Global.  Allen co-founded Cumulus Global in 2006 to offer small businesses enterprise-grade email security and compliance using emerging cloud solutions. He has led the company’s growth into a managed cloud service provider with over 1,000 customers throughout North America. Starting his first business at age 12, Allen is a serial entrepreneur. He has launched strategic IT consulting, software, and service companies. An advocate for small and midsize businesses, Allen served on the board of the former Smaller Business Association of New England, local economic development committees, and industry advisory boards.