Responsible AI

Empowering developers to build beneficial AI systems

Mayank Kumar

@munk

Guidelines for Responsible AI Use at Work

Submitted Aug 1, 2024

When interacting with AI platforms, it’s essential to take precautions to protect intellectual property, prevent the leak of proprietary information, and minimize security risks. Here are some guidelines to follow:

Remove proprietary information before querying AI platforms

  • Sanitize input by removing or replacing sensitive data such as database names, internal library names, and other identifiable information.
  • Use generic placeholder names (e.g., “COMPANY_NAME” instead of your actual company name).
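A minimal sketch of this kind of pre-submission sanitization, using a hand-maintained map of sensitive terms. The company, database, and library names below are hypothetical; substitute the identifiers used in your own organization:

```python
import re

# Hypothetical sensitive terms mapped to generic placeholders;
# replace these with your organization's actual identifiers.
REPLACEMENTS = {
    r"\bAcmeCorp\b": "COMPANY_NAME",
    r"\bcustomers_prod\b": "DATABASE_NAME",
    r"\bacme_auth_lib\b": "INTERNAL_LIBRARY",
}

def sanitize(prompt: str) -> str:
    """Replace known proprietary identifiers with placeholders."""
    for pattern, placeholder in REPLACEMENTS.items():
        prompt = re.sub(pattern, placeholder, prompt)
    return prompt

print(sanitize("Why does acme_auth_lib time out against customers_prod?"))
# prints: Why does INTERNAL_LIBRARY time out against DATABASE_NAME?
```

A simple mapping like this is easy to audit and to share across a team; the main maintenance cost is keeping the list of sensitive terms up to date as projects are renamed.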

Avoid publishing full code files

  • Instead of sharing entire code files, focus on specific snippets or pseudocode that illustrate your question or problem.
  • Ensure that shared code doesn’t contain comments or identifiers that could reveal proprietary information.
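For Python snippets, one way to strip comments before sharing is the standard library's tokenize module, which avoids the pitfalls of naive regex approaches (such as mangling `#` characters inside strings). A sketch:

```python
import io
import tokenize

def strip_comments(source: str) -> str:
    """Remove # comments from a Python snippet before sharing it.

    Works on tokens rather than raw text, so '#' inside string
    literals is left untouched.
    """
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    kept = [tok for tok in tokens if tok.type != tokenize.COMMENT]
    return tokenize.untokenize(kept)

snippet = 'url = "https://example.com/#anchor"  # internal endpoint\n'
print(strip_comments(snippet))
```

This only removes comments; identifiers that reveal proprietary information still need to be renamed by hand or with a sanitizer like the one sketched above.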

Be cautious with AI-generated output

  • Avoid directly copying and pasting AI-generated content, as doing so may infringe copyright or violate licenses (many models are trained on datasets that include restrictively licensed material).
  • Use AI-generated content as inspiration or a starting point, but always review and modify it to ensure originality and compliance with your organization’s policies.

Avoid using names of individuals in prompts

  • When generating content (code, images, etc.), refrain from using names of specific artists, developers, or other individuals.
  • Use generic descriptors or styles instead of named entities.

Implement data anonymization techniques

  • Use anonymization libraries and tools to remove or mask sensitive information before interacting with AI platforms.
  • Consider employing PII (Personally Identifiable Information) redactor models to automatically identify and remove sensitive data.
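A minimal regex-based redactor can serve as a first line of defense. The patterns below are illustrative, not exhaustive; dedicated PII redactor models and libraries catch far more (names, addresses, account numbers, etc.):

```python
import re

# Illustrative PII patterns; a production redactor would use a
# dedicated library or model rather than a handful of regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IP_ADDRESS": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Mask common PII patterns before sending text to an AI platform."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com or +1 415 555 0100"))
# prints: Contact [EMAIL] or [PHONE]
```

Note the ordering: the IP pattern runs before the phone pattern, since a string of digits and dots could otherwise be mistaken for a phone number.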

Establish clear organizational policies

  • Develop and communicate clear guidelines for employees on the use of AI tools, including what types of information can and cannot be shared.
  • Provide training on best practices for interacting with AI platforms securely.

Use enterprise-grade solutions when available

  • For organizations handling highly sensitive data, consider using enterprise-level AI solutions that offer enhanced security and compliance features.
  • Evaluate open-source alternatives that can be deployed on-premises for maximum control over data.

Stay informed about AI platform updates

  • Regularly review the terms of service and privacy policies of the AI platforms you use.
  • Subscribe to updates or announcements from AI providers to stay informed about changes in data handling practices.

Consider data residency and geopolitical factors

  • Be aware of where AI platforms store and process data, especially if your organization is subject to specific data residency requirements.
  • Consider the potential impact of geopolitical events on access to AI services and plan accordingly.

By following these guidelines, individuals and organizations can significantly reduce the risks associated with using AI platforms while still leveraging their powerful capabilities. Remember to regularly review and update your practices as the AI landscape continues to evolve.


Do you have additional ideas or best practices for using AI responsibly at work? We’d love to hear from you! Please leave your suggestions in the comments below. Your input will be reviewed, and this living document will be updated to reflect the collective wisdom of our community. Together, we can create a more secure and responsible AI ecosystem.
