GenAI startups discuss use cases, challenges and safety for consumers

On 23 February 2024, The Fifth Elephant organized a roundtable discussion to understand the impact of generative AI on the Indian industry. Startup founders and developers innovating in this space were invited to discuss various use cases of generative AI in their domains, as well as the risks and challenges associated with them.

The participants represented domains such as healthcare, creative AI, e-commerce, life sciences, research, hardware development, and cybersecurity. Mercari India was the venue host for this session.

This report outlines the points that were made and insights from the participants’ experiences about the impact of GenAI in different domains and application areas.

The roundtable was moderated by Anantharam Vanchi Prakash, co-founder and architect in residence at xmplify.tech.

Participants included:

  • Dr Vikram Vij, Sr. Vice President at Samsung Electronics
  • Yuki Ishikawa, Vice President for Generative AI/LLM at Mercari, Inc.
  • Soumyadeep Mukherjee, co-founder and CTO at Dashtoon
  • Bargava S, Chief Product and Data Officer at 5C Network
  • Akshat Gupta, Tech Lead for Machine Learning at Glance InMobi
  • Abhishek H. Mishra (Tokenbender), creator of CodeCherryPop LLM series
  • Anand Janakiraman, COO at Strand Life Sciences
  • Apurv Mehra, co-founder at BlendNet
  • Sanchit Garg, CTO at BlendNet
  • Nilesh Trivedi, co-founder and CTO at Snowmountain.ai
  • Sachin Dharashivkar, co-founder at AthenaAgent
  • Sasank Chilamkurthy, founder and CEO at Von Neumann AI

KEY INSIGHTS

Creativity, art and content

With generative AI tools, individuals no longer require fine arts training to produce high-quality content, leading to improved visual content across various mediums. Moreover, the technology enables the generation of text, images, and videos, fostering more democratic innovation in content creation while significantly reducing production costs. As a result, businesses experience higher audience-to-customer conversion rates, making generative AI a valuable tool for content creators and marketers alike.

However, the widespread adoption of generative AI also raises concerns regarding its long-term implications. There is apprehension about a future dominated by AI-generated and personalized content, potentially diminishing human creativity. Moreover, artists face the threat of losing their livelihoods as generative AI becomes more prevalent in content creation. Additionally, the lack of robust guardrails and methodologies for evaluating generated content poses challenges for content moderation, which primarily relies on labor-intensive manual processes.

Despite these risks and fears, the evolving landscape of generative AI underscores the importance of striking a balance between innovation and responsible usage.

Use Cases and Benefits:

  • Anyone can be creative without fine arts training using generative AI
  • Improved visual content quality
  • Generating text, images, and videos for content innovation
  • Reduced cost of generating content, with higher audience-to-customer conversion rates

Risks and Fears:

  • A future where most content is generated and personalized
  • Human creativity may be impaired; however, it is hard to impair human creativity, as humans will find ways to enhance their creativity with or without generative AI
  • Artists may lose their livelihoods to generative AI
  • Limited possibility of guardrails, and a crucial need for content moderation, which today is largely manual, labour intensive, and has multiple challenges
  • Methodologies for evaluating generated content are lacking

Radiology and healthcare

By leveraging generative AI, healthcare providers can generate radiology reports consistently across various languages, ensuring accuracy and efficiency in diagnostic processes. Moreover, AI applications in radiology enable the expedited analysis of medical scans, alleviating the immense workload of radiologists and enhancing overall productivity. The development of AI copilots further extends healthcare accessibility to underserved populations in tier 2 and tier 3 cities, facilitating timely medical scans and subsequent healthcare interventions, thereby bridging gaps in healthcare access.

However, the integration of generative AI in radiology and healthcare also presents certain risks and challenges. One notable concern is the potential increase in costs associated with implementing and maintaining AI systems, making the combination of generative AI and human expertise more expensive than relying solely on human resources. Additionally, there is a pressing need for accountability measures to be implemented within healthcare workflows, ensuring clear delineation of responsibilities among teams and individuals involved in utilizing generative AI technologies.

Use Cases and Benefits:

  • In a country with a high diversity of languages, generative AI has helped generate radiology reports with a consistent level of quality
  • AI in radiology fast-tracks the diagnostic process, reduces the immense workload of radiologists, and increases productivity
  • Building copilots for under-served people, especially in tier 2 and tier 3 cities, improves their access to medical scans and subsequent healthcare

Risks and Fears:

  • Increased costs, making the mix of generative AI and humans more expensive to work with than humans alone
  • Measures for accountability need to be in place, with teams and individuals owning specific parts of the workflow

R&D and engineering

From serving as copilots for coding and testing to facilitating translation tasks, generative AI tools can streamline various aspects of the R&D process, enhancing efficiency and productivity. Moreover, the technology enables the generation of reports and SQL queries, and can effectively structure unstructured data. Furthermore, the ability to use smaller or more domain-focused datasets to build tools for specific use cases (small language models) allows for the creation of highly specialized and efficient solutions, often comparable to or better than existing models like GPT-4 on those narrow tasks.
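One of the use cases above, report and SQL generation, mostly comes down to prompt assembly: the schema and the user's question are combined into a single instruction for the model. The sketch below illustrates that pattern; the schema, prompt wording, and `call_llm` placeholder are all illustrative, not any participant's actual system.

```python
# Sketch of a natural-language-to-SQL prompt pipeline.
# `call_llm` is a placeholder for whichever hosted or in-house model a team picks.

SCHEMA = """
CREATE TABLE orders (id INT, customer_id INT, total DECIMAL, created_at DATE);
CREATE TABLE customers (id INT, name TEXT, city TEXT);
"""

PROMPT_TEMPLATE = (
    "You are a SQL assistant. Given this schema:\n{schema}\n"
    "Write one SQL query answering: {question}\n"
    "Return only SQL."
)

def build_sql_prompt(question: str, schema: str = SCHEMA) -> str:
    """Assemble the prompt an LLM would receive for SQL generation."""
    return PROMPT_TEMPLATE.format(schema=schema, question=question)

def call_llm(prompt: str) -> str:
    """Placeholder: in practice this calls a model API and returns its text."""
    raise NotImplementedError

prompt = build_sql_prompt("Total order value per city last month")
```

In practice the returned SQL would also be validated (parsed, run against a read-only replica) before being shown to a user.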

The integration of generative AI in R&D and engineering also presents certain risks and challenges. One significant concern revolves around the selection of appropriate large language models (LLMs) for building tools, along with the decision of whether to develop them in-house or utilize existing models. Additionally, there are apprehensions regarding data security, particularly the risk of data leaks involving confidential company data and source code. The scarcity of GPUs and the associated increased costs also pose challenges for companies, although efforts are underway to develop strategies for cost reduction. Furthermore, the current lack of strategies and tools for LLMops (large language model operations) and the centralized control of data and compute resources by big tech companies underscore the need for innovative solutions and collaborative efforts to address these challenges.

Use Cases and Benefits:

  • Copilots for coding and testing
  • Translation
  • Report generation and SQL
  • Using datasets to build tools for a narrow use case (small language models), making them comparable to, or even better than, GPT-4 on that use case
  • Structuring unstructured data
  • Evaluation models present a good use case for fine-tuning

Risks and Fears:

  • Choosing which LLM to use to build tools, and whether to build one in-house
  • Data leaks, especially of confidential company data and source code
  • Lack of GPUs
  • Increased costs, though companies are working on strategies to reduce them
  • Strategies and tools for LLMOps are currently lacking
  • Data and compute are centralized, with access controlled by big tech companies
  • Skills required for AI, deep learning, ML, etc. are quite niche and expensive to come by

Hardware and communications

The development of generative AI tools requires significant resources and hardware, which makes the process expensive and its benefits hard to access. Lately, the cost of Intel chips has seen a decline, making advanced hardware more accessible and affordable for various applications. Moreover, generative AI is instrumental in enhancing last-mile internet and information connectivity by leveraging multiple forms of connectivity, including satellites, and employing AI-driven capabilities such as translation and personalization. This has significant implications for education and remote learning, particularly in bridging information gaps and improving access to educational resources in underserved areas.

However, while hardware advancements have been notable, the software infrastructure required to fully leverage these capabilities is still lacking, highlighting the need for further development in this area. Additionally, there is a growing concern regarding the proliferation of mis- and disinformation, exacerbated by generative AI’s ability to create convincing but false narratives (hallucination) that are challenging to detect. This poses a significant challenge, particularly considering the digital divide and varying levels of tech literacy.

Use Cases and Benefits:

  • The cost of Intel chips has fallen
  • Boosting last-mile internet and information connectivity using multiple forms of connectivity (including satellites) and generative AI for translation, personalization, etc., especially for education

Risks and Fears:

  • Hardware is strong, but the software is not ready to take advantage of it; such software needs to be built
  • Increased spread of mis- and disinformation (hallucination), which is also harder to detect, especially considering the digital divide and the vast differences in levels of tech literacy

Cybersecurity

By utilizing generative AI algorithms, cybersecurity professionals can generate novel payloads to simulate various attack scenarios, enabling them to proactively identify and address potential security loopholes in software systems. This approach not only enhances the efficiency of vulnerability detection but also facilitates the development of robust security measures to safeguard digital assets and sensitive information.
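A minimal way to picture "generating novel payloads" is mutation-based fuzzing: take known-good inputs and perturb them to probe for unexpected behaviour. The sketch below uses random byte mutation as a stand-in; in the setups discussed here, a generative model would propose the candidate payloads instead. The seeds and function names are illustrative.

```python
import random

def mutate_payload(seed: bytes, n_mutations: int = 3, rng=None) -> bytes:
    """Return a copy of `seed` with a few bytes randomly rewritten."""
    rng = rng or random.Random(0)  # fixed seed so runs are reproducible
    data = bytearray(seed)
    for _ in range(n_mutations):
        i = rng.randrange(len(data))
        data[i] = rng.randrange(256)
    return bytes(data)

# Illustrative seed inputs: an HTTP request line and a JSON body.
seeds = [b"GET / HTTP/1.1\r\n", b'{"user": "admin"}']
payloads = [mutate_payload(s) for s in seeds]
```

Each payload would then be fed to the target system under test, with crashes or anomalous responses flagged for triage.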

However, one notable concern is the potential for regulatory capture, wherein regulatory frameworks fail to keep pace with the rapid evolution of generative AI technologies, leading to an increase in software bugs and vulnerabilities. Additionally, there is a recognized need for the development of efficient LLMOps strategies and practices within the cybersecurity domain. The slower pace of LLMOps development poses challenges in effectively managing and deploying generative AI algorithms for cybersecurity purposes.

Use Cases and Benefits:

  • Generative AI is being used in cybersecurity to generate novel payloads to find vulnerabilities in existing and new code bases

Risks and Fears:

  • Regulatory capture will lead to bugs skyrocketing; presently, there is limited knowledge on how to solve them
  • Development of LLMOps strategies and practices is slower

E-commerce

By employing technologies like GPT (Generative Pre-trained Transformer), e-commerce platforms can better understand and anticipate customer needs through natural language processing. This enables more seamless and personalized interactions between customers and chatbots, enhancing the overall user experience and driving customer satisfaction. Additionally, generative AI in e-commerce facilitates efficient conversation processing, allowing businesses to streamline communication channels and provide timely assistance to customers, ultimately leading to increased engagement and conversion rates.
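The "conversation processing" step above is essentially intent extraction: map a free-text chat turn to an action the platform can take. The toy stand-in below uses a keyword lookup to illustrate the interface; a production system would send the message to a GPT-style model instead. The intent labels and keywords are invented for illustration.

```python
# Toy intent extractor standing in for an LLM-backed chatbot step.
INTENT_KEYWORDS = {
    "refund": "return_request",
    "track": "order_status",
    "size": "product_question",
}

def extract_intent(message: str) -> str:
    """Return a coarse intent label for a customer chat message."""
    text = message.lower()
    for keyword, intent in INTENT_KEYWORDS.items():
        if keyword in text:
            return intent
    return "general_query"

print(extract_intent("Where can I track my order?"))  # order_status
```

The privacy concern raised in the discussion applies exactly here: whichever model performs this step sees the full customer conversation.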

However, the integration of generative AI in e-commerce also raises significant privacy concerns and risks related to data usage and algorithmic bias. With access to customer conversations on chatbots, there is a potential breach of privacy as sensitive information may be exposed to the GPT model. Moreover, e-commerce companies may leverage this data to manipulate supplier visibility and influence which suppliers are prioritized or supported by the algorithm. This introduces concerns about transparency and fairness in algorithmic decision-making, highlighting the need for robust data privacy regulations and ethical guidelines to govern the use of generative AI.

Use Cases and Benefits:

  • Conversation processing by GPT to understand what the customer wants

Risks and Fears:

  • Privacy concerns, as the GPT has access to the customer’s conversations on the chatbot
  • E-commerce companies using this data to control which suppliers are on the app and are supported by the algorithm

Small Language Models

Small language models are built to specialize in narrow, low-logic tasks. Curating datasets by identifying specific attributes makes it possible to build a pipeline that synthesizes new information, evaluates it against a quality bar, and rejects whatever does not meet the criteria. With grounded feedback, which is relatively easy and affordable to obtain in India, one can build very good small language models, which in turn become a source of information for building bigger models of higher quality.
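The synthesize-evaluate-reject loop described above can be sketched as a simple filter over generated samples. Here `synthesize` and `quality_score` are placeholders: a real pipeline would sample from a small language model and score with grounded human feedback, respectively. The seed strings and the length-based score are purely illustrative.

```python
def synthesize(seed: str) -> str:
    """Placeholder generator: a real pipeline samples from a small model."""
    return f"expanded: {seed}"

def quality_score(sample: str) -> float:
    """Placeholder evaluator: real feedback would come from human raters."""
    return min(1.0, len(sample) / 40)

def curate(seeds, threshold=0.5):
    """Keep only synthesized samples that clear the quality bar."""
    kept = []
    for seed in seeds:
        sample = synthesize(seed)
        if quality_score(sample) >= threshold:
            kept.append(sample)  # accepted into the curated dataset
        # rejected samples are simply dropped
    return kept

dataset = curate(["radiology report phrasing", "ok"])
```

The curated output then serves as training data, either for the small model itself or, as noted above, as an input for building bigger models.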

Challenges of authenticity, privacy, other harms to humans

The overall risks associated with generative AI encompass issues of authenticity, privacy, and potential harms to individuals, necessitating the implementation of checks and balances. While generative AI offers opportunities for innovation, there is a pressing need to address concerns such as the detection of AI-generated content versus real content, as well as human resistance and distrust towards such technology. Cultural nuances play a role in shaping perceptions, and individuals may require time to acclimate to new technological advancements. Additionally, the explainability of AI-generated content poses a significant challenge, highlighting the complexity of understanding the data that generates the content.

New use cases for GenerativeAI

Certain use cases for generative AI that have not yet been explored fully were also discussed during the session, including data structuring and storage for streamlined processing, wherein it can efficiently organize and manage datasets to facilitate seamless data analysis and utilization. Additionally, the technology can automate code checking processes by employing specialized debug tokens, thereby enhancing software development workflows by identifying and resolving bugs more effectively.

AI ops is another promising application, leveraging generative AI to automate operational tasks and optimize system performance, leading to increased efficiency and scalability in various operations. Furthermore, generative AI can be harnessed to develop tools for understanding Runbooks, enabling organizations to comprehensively analyze and interpret operational procedures and protocols for improved decision-making and workflow management.

IN SUMMARY

To conclude, generative AI has led to numerous innovations across domains and enhances the possibility of bridging the digital divide. On the other hand, every use case comes with its own set of unique risks, and as is the case with any innovation, there must be a balance between development and safety.

This discussion highlighted specific areas where further research as well as policy developments were required. It is essential that developers continue to discuss and deliberate on innovations as well as strategies and practices for risk mitigation.

About The Fifth Elephant

The Fifth Elephant is a community-funded organization. If you like the work that The Fifth Elephant does and want to support meet-ups and activities - online and in-person - contribute by picking up a membership.

Contact

For inquiries, leave a comment or call The Fifth Elephant at +91-7676332020.
Join The Fifth Elephant Telegram group or WhatsApp group.

Hosted by

Jump starting better data engineering and AI futures