
AI and Privacy Issues: Data Leaks and Breaches

We recently posted about the AI warning we received from a partner about the use of AI tools and protecting their confidential information. Beyond the specifics of the warning, we quickly saw a much broader context. Used carelessly, AI tools can result in unauthorized data disclosures, breaches, or leaks. These disclosures may easily violate laws, regulations, industry standards, and contractual obligations. Before exposing your business to unnecessary liabilities, understand how your AI tools and services manage and protect data privacy.

Scope of the AI and Privacy Problem

To gain a better sense of the issue, we decided to look into the data privacy practices of meeting assistants. Meeting assistants are one of the most commonly used AI tools for small and midsize businesses. Traditional meeting assistant tools transcribe discussions. Newer versions use AI engines to capture action items, summarize discussion points, and analyze the attitudes and sentiments of participants. We reviewed the terms of service, privacy policies, and FAQs for several services.

Here are some excerpts from our findings (company and service names redacted):

AI Terms of Service

Do not use the service if you need to keep protected or confidential information private:

You hereby represent and warrant to [Company] that your User Content … (ii) will not infringe on any third party’s copyright, patent, trademark, trade secret or other proprietary right or rights of publicity, personality or privacy; (iii) will not violate any law, statute, ordinance, or regulation (including without limitation those governing export control, consumer protection, unfair competition, anti-discrimination, false advertising, anti-spam or privacy);

[Company] is not liable if you use its services:

… the user understands and accepts the risks involved with the use of AI or similar technologies and agrees to indemnify and hold [Company] harmless for any claims, damages, or losses resulting from such usage.

Allowing an AI engine to analyze your information, or allowing a service to use your information to train their AI-based services, is a disclosure:

When you post or otherwise share User Content on or through our Services, you understand and agree that your User Content … may be visible to others

AI Privacy Policies

Using AI tools has inherent risks:

By utilizing [Company]’s services, the user understands and accepts the risks involved with the use of AI or similar technologies and agrees to indemnify and hold [Company] harmless for any claims, damages, or losses resulting from such usage.

Some tools have service options, at added costs, to ensure data privacy:

… customers that want their data to be strictly segregated (for example, customers dealing with PHI) can choose the [service] option to exercise complete control over their compute and data infrastructure, ensuring that their data is separated per their compliance requirements.

Some services explicitly tell you that sharing confidential information violates their privacy policy:

You may also post or otherwise share only Content that is nonconfidential and that you have all necessary rights to disclose.

The Risks and Challenges with AI

With justifiable concerns about data protection and privacy, we have been trained to think about data leaks and breaches in terms of cyber attacks. We also look at “insider threats,” which are often human errors such as accidentally sharing files externally or putting confidential information in an unsecured email.

The use of meeting assistants and other AI-powered productivity tools creates a new category of risk. In order to learn and improve, AI tools need to train using information. The easiest way to provide information to train an AI tool is to capture information provided by the users. The users get their results; the AI tool trains, learns, and improves.

While this works for the AI tool or service provider, it creates a data breach risk for users unless the tool has specific policies and services to ensure compliance with data privacy laws and regulations.

Using an unsecured AI meeting assistant creates an incidental, if unintentional, breach. 

Some examples of incidental breaches caused by unsecured AI meeting assistants:

  • Two doctors discuss a patient consult, disclosing personal health information (PHI) to third parties in violation of HIPAA
  • You discuss project details with one of your clients, disclosing confidential intellectual property in violation of your contract
  • Your financial advisor discusses your financial holdings and accounts with you, disclosing personally identifiable financial information in violation of industry regulations and standards

Protect Yourself and Your Business from AI and Privacy Issues

From our review of several AI meeting assistant services, very few will keep your information private. Those that do will charge additional fees.

When you get on a video meeting or conference call, ask the host if their meeting assistant is secure. If not, or if they are unsure, ask them to turn it off.

More generally, take a step back and plan your approach to AI.

  • Consider how and when you want to use AI in your business
  • Make sure you and your team understand your contractual and regulatory responsibilities with respect to information privacy
  • Assess the AI tools and services you plan to use:
    • Understand their data privacy commitments
    • Match privacy policies and commitments against your business and legal requirements
    • Opt in to agreements that ensure data privacy, even if it requires paying for the service

With an understanding of your requirements and AI services, AI can add value to your business without introducing significant avoidable risk.

We Can Help

To discuss your technology service needs and plans, click here to schedule a call with a Cloud Advisor or send us an email.

About the Author

Allen Falcon is the co-founder and CEO of Cumulus Global.  Allen co-founded Cumulus Global in 2006 to offer small businesses enterprise-grade email security and compliance using emerging cloud solutions. He has led the company’s growth into a managed cloud service provider with over 1,000 customers throughout North America. Starting his first business at age 12, Allen is a serial entrepreneur. He has launched strategic IT consulting, software, and service companies. An advocate for small and midsize businesses, Allen served on the board of the former Smaller Business Association of New England, local economic development committees, and industry advisory boards.

Our First AI Warning: Why Using AI Services Can Breach Your Contracts

We recently received our first AI Warning. This was not a general warning such as, “anything built for good can be used for evil” or “AI can replace you.” We received a direct warning about specific uses of artificial intelligence services and our contracts. The warning we received applies to you as well.

Some Background About this AI Warning

Cumulus Global is known for our professional services, including our ability to successfully manage cloud migrations from a variety of local environments. We often provide these services to other technology firms that need our expertise and experience to solve specific client needs. We have standing partnership agreements with several of these firms.

The AI Warning came from one of our partners.

The AI Warning

The warning we received centered on our potential use of AI services and the implications for confidential information belonging to our partner and their clients. The warning stated that providing this data to any AI system or tool is a likely violation of our contract, confidentiality, and non-disclosure agreements.

Specifically:

  • Providing confidential information to any AI system or tool is an unauthorized disclosure unless we have a contractual agreement in place with the AI vendor that ensures all data remains private and confidential.
  • The use of any confidential information to feed or train an AI system or tool is considered an unauthorized disclosure. Even if the AI system or tool is private, the confidential information will be used outside the scope of any project, work, or need.

In addition to clearly defining limits on the use of their data with AI services, the warning included the company’s intent to pursue any and all contractual and legal methods to prevent, or in response to, disclosures.

Bigger Context

While this AI warning was specific to one business relationship, we see a bigger context. The current flood of AI services is exciting, and the potential uses and benefits are great. If we want to engage, however, we need to be careful. Whether we are deliberately training an AI system or creating prompts and providing feedback to refine answers, we are placing information in the hands of others. Unless we take explicit steps to ensure privacy with AI tools, our expectation must be that the information we provide will be used to train the AI service, effectively placing the information in the public domain.

We must also recognize that the generative nature of AI increases the risk of improper disclosure. While we may not intend to disclose information, AI engines can recognize and correlate information. In other words, AI services can piece together data to create and share information that should be private.

Your Action Plan to Prevent AI Issues

Take a step back and plan your approach to AI.

  • Consider how and when you want to use AI in your business
  • Make sure you and your team understand your contractual and regulatory responsibilities with respect to information privacy
  • Assess the AI tools and services you plan to use:
    • Understand their data privacy commitments
    • Match privacy policies and commitments against your business and legal requirements
    • Opt in to agreements that ensure data privacy, even if it requires paying for the service

With an understanding of your requirements and AI services, AI can add value to your business without introducing significant avoidable risk.
