Submitted by: Karthik Shashidhar (@karthiks), Sreechand Tavva (@sreechand, Contributor), Vikram Nayak (@vikramnayak, Contributor), Amit Kapoor (@amitkaps, Contributor)

Birds of Feather (BoF) Session Summary: AI disrupting Data Visualization

Submitted Nov 26, 2025

The BoF took place on 4 December 2025 at Samarthanam Auditorium, HSR Layout. About 30 people attended, from backgrounds including data science, data engineering, product, and data visualisation.

We discussed how we have been using LLMs to visualise data better, best practices for producing effective charts, potential pitfalls in having LLMs generate visualisations, and how LLMs are raising both the floor (the quality of typical charts produced) and the ceiling (what is possible).

[BoF session photo 1]

Notable insights & discussion highlights

The Two-Phase Framework: We spoke about a clear structure for thinking about visualization work:

  • Explore phase: Starting with fuzzy problems, collecting/cleaning data, modeling, discovering insights
  • Explain phase: Designing for stakeholders, creating narratives, testing with audiences
  • Most participants found LLMs more useful in the “explain” phase than exploration, though a lot of LLM and dataviz training data relates to the exploration phase!

The Skills File Breakthrough: Multiple participants reported success with context files (such as Claude's SKILL.md or CLAUDE.md) containing company style guides, design principles, and chart chooser decision trees.
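As an illustration only (the file layout and rules below are hypothetical, not a participant's actual file), such a context file might look like:

```markdown
# Data Visualization Style Guide (context file for the LLM)

## Design principles
- Maximize the data-ink ratio; keep gridlines lighter than the data marks.
- Use the company palette: #1f4e79 (primary), #c0504d (accent).

## Chart chooser rules
- Comparison across categories -> horizontal bar chart
- Trend over time -> line chart
- Part-to-whole, few categories -> stacked bar (never a pie with more than 5 slices)
- Distribution -> histogram or box plot
```

The value participants described came from the LLM reading this file before every charting request, so outputs stay on-brand without repeating the rules in each prompt.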

LLMs as Code Generators: The clearest win was reduced friction: generating boilerplate code for D3, Vega, ggplot2, and web frameworks without manual coding.
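The kind of boilerplate participants described offloading to an LLM can be sketched with a minimal Vega-Lite bar-chart spec assembled in plain Python (standard library only; the field names and data are illustrative):

```python
import json

def bar_chart_spec(values, x_field, y_field, title):
    """Assemble a minimal Vega-Lite v5 bar-chart spec as a Python dict."""
    return {
        "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
        "title": title,
        "data": {"values": values},
        "mark": "bar",
        "encoding": {
            "x": {"field": x_field, "type": "nominal"},
            "y": {"field": y_field, "type": "quantitative"},
        },
    }

spec = bar_chart_spec(
    [{"team": "A", "count": 12}, {"team": "B", "count": 7}],
    x_field="team", y_field="count", title="Tickets by team",
)
print(json.dumps(spec, indent=2))  # paste into any Vega-Lite viewer
```

Writing this by hand is tedious but mechanical, which is exactly why participants found LLMs so effective at it.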

The Judgment Gap: While LLMs excel at execution, they struggle with choosing appropriate visualizations. As one participant observed: “I feel like LLMs don’t have the instinct of how to communicate this information best... maybe it’s some kind of lack of empathy.”

Key topics discussed

1. Current use cases & productivity gains

Communication & Presentation:

  • Converting spreadsheet data into “intelligent interfaces” and rich visualizations
  • Creating modular chart components for PowerPoint and slide decks
  • One product manager built their own system using LLMs: “Data visualization more for communication of ideas on slides - should be able to give me a slide outline and tell me how to visually construct this slide”

Development Workflow:

  • Frontend development teams using LLMs for scrollytelling and interactive visualizations
  • Economics research labs accelerating design implementation
  • Students using video visualization features to create educational content with explanations

Problem Structuring:

  • Design students (non-coders) using LLMs to structure fuzzy problems into data problems
  • Creating hypothesis trees and mental models
  • Generating mind maps and conceptual frameworks

2. Practical strategies & tools

Context-Setting Approaches:

  • Encoding chart chooser decision trees from experts
  • Feeding design principles from Edward Tufte’s “The Visual Display of Quantitative Information”
  • Creating company-specific themes as JSON that LLMs can reference
  • One participant: “Take a photo of a chart chooser, encode this as an if-else decision tree”
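The suggestion to encode a chart chooser as an if-else decision tree could look something like this sketch (the rules here are simplified examples, not the actual tree anyone shared):

```python
def choose_chart(goal: str, n_categories: int = 0, over_time: bool = False) -> str:
    """Toy chart-chooser decision tree: map an analytic goal to a chart type."""
    if over_time:
        return "line chart"
    if goal == "comparison":
        # too many bars become unreadable; switch to a dot plot
        return "bar chart" if n_categories <= 12 else "dot plot"
    if goal == "part-to-whole":
        return "pie chart" if n_categories <= 5 else "stacked bar"
    if goal == "distribution":
        return "histogram"
    if goal == "relationship":
        return "scatter plot"
    return "table"  # fall back when no rule matches

print(choose_chart("comparison", n_categories=4))  # -> bar chart
print(choose_chart("relationship"))                # -> scatter plot
```

Once the expert's tree is written down this explicitly, it can be pasted into a context file and the LLM can apply it consistently instead of guessing.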

Multi-LLM Workflows:
Sophisticated pipelines emerged as a pattern:

  • One LLM creates the story structure; another refines and deep-dives
  • Final tools (Napkin.ai, Superset) generate polished visualizations
  • Using MCPs (Model Context Protocol) to connect different services

Specific Tools Mentioned:

  • Marimo: DAG-oriented notebook with integrated LLM for EDA
  • Cursor: Development environment with multiple LLM support
  • Superset: Open-source BI with LLM integration via MCP
  • Quarto: Markdown + Jupyter for literate programming
  • Custom GPTs: For repeated, specialized visualization tasks

3. The data quality challenge

The Core Problem: As one participant shared: “In my first month at my previous company, I made a beautiful dashboard based on bad data. No matter how much I told them it was based on bad data, they wouldn’t take it back.”

Multiple Dimensions of the Issue:

  • Business users creating visualizations without understanding data lineage
  • Dashboards showing wrong answers because data pipelines didn’t run
  • Beautiful charts masking fundamental data problems
  • Non-technical users believing outputs without validation

Attempted Solutions Discussed:

  • Programmatic data quality checks
  • Building semantic layers that track data freshness and lineage
  • LLMs checking if data is “fresh as of today” before generating visualizations
  • Multiple competing LLMs: one creates, one critiques, one defends
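One of the attempted solutions, checking that data is "fresh as of today" before any chart is generated, needs nothing beyond the standard library. In this sketch the one-day threshold is an assumed policy, not something the session specified:

```python
from datetime import datetime, timedelta, timezone

def assert_fresh(last_updated: datetime, max_age: timedelta = timedelta(days=1)) -> None:
    """Refuse to visualize stale data: raise instead of silently charting it."""
    age = datetime.now(timezone.utc) - last_updated
    if age > max_age:
        raise ValueError(
            f"Data is {age} old (limit {max_age}); refresh the pipeline "
            "before generating charts from it."
        )

# a pipeline run from an hour ago passes the check
assert_fresh(datetime.now(timezone.utc) - timedelta(hours=1))
```

Failing loudly here addresses the dashboard-on-bad-data story above: the chart never gets made, so stakeholders never see a beautiful wrong answer.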

4. The design & communication challenge

Getting Design Principles Right:
One BI manager asked: “How do I communicate design knowledge to data engineers? People from tech backgrounds don’t see design as something that matters, but it matters a lot because that’s the front end.”

Information Compression:
A senior analyst highlighted: “Our senior stakeholders would like pure information and they would have just five minutes. From an engineer to a manager to a director, as information flows, it gets compressed. How do I put this information across in two minutes?”

The Response:

  • Use the same approach you’d use to teach humans: style guides, feedback, examples
  • Think about how theming works in ggplot2 or Vega-Lite, and feed that structure to LLMs
  • But ultimately: “Getting people to care is not an AI problem, it’s a human problem. You have to educate through mentoring, feedback, in a very human one-to-one way.”
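The theming point, that ggplot2 and Vega-Lite themes are just structured settings an LLM can be handed, can be made concrete with a small Vega-Lite-style theme object (the palette and font below are placeholders, not anyone's actual brand):

```python
import json

# A company theme expressed as a Vega-Lite config fragment: structured,
# diffable, and easy to paste into an LLM's context file.
COMPANY_THEME = {
    "config": {
        "font": "Inter",
        "axis": {"labelFontSize": 12, "titleFontSize": 13, "grid": False},
        "range": {"category": ["#1f4e79", "#c0504d", "#9bbb59"]},
        "view": {"stroke": None},
    }
}

def apply_theme(spec: dict, theme: dict = COMPANY_THEME) -> dict:
    """Merge the shared theme config into a chart spec (spec wins on conflict)."""
    merged = dict(theme["config"])
    merged.update(spec.get("config", {}))
    return {**spec, "config": merged}

themed = apply_theme({"mark": "bar", "config": {"font": "Georgia"}})
print(json.dumps(themed["config"], indent=2))
```

Because the theme is plain data rather than tacit design knowledge, it travels well: the same JSON works in a style guide, a code review, and an LLM prompt.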

[BoF session photo 2]

Ideas that resonated

“Raise the Floor, Raise the Ceiling”: LLMs are closing the gap between expert and non-expert users. Non-experts no longer need to master entire tools - just understand concepts well enough to guide the LLM effectively.

The Sparring Partner Approach: Using ChatGPT not to generate charts, but to critique them. A data professional only uses it “as a sparring partner - it can be good at critiquing or giving ways to think and make better charts.”

Progressive Learning Through Memory: LLMs that remember user preferences improve over time: “The LLM knows your preference - oh this person when they say this, they actually need this.”

The Photography Analogy: Instagram democratized photography, creating millions of photos but devaluing professional photographers. The parallel helped frame expectations: “It’s gonna happen. You will have millions of crappy data viz. That’s how you learn from the crappy data viz.”

Iterative Improvement of Prompts: “LLMs are better at writing prompts than humans. In my company, all the prompts are written by LLMs - we get LLMs to write prompts.”

Common challenges & concerns

The trivialization effect

Data engineers feeling their work is undervalued: “Business folks dabble with data visualization, which is good, but the problem is they produce anything and believe it. The data engineering work gets trivialized - ‘we can also do it, why do you need so many resources?’”

Chart type selection

LLMs struggle without explicit guidance: “If I don’t specify a chart type, sometimes it will pick a chart type and I’ll look at it and be like, this is the worst kind to communicate this.”

The responsibility question

When discussing misleading visualizations, participants debated: “Is it the organization’s responsibility? The person making the chart? Should we have automated linters for data quality?” No clear consensus emerged.

[BoF session photo 3]

Conclusions & open questions

What’s working well

  • Code generation: Mature and reliable for standard visualization libraries
  • Style consistency: Skills files and context-setting produce consistent, on-brand outputs
  • Accessibility: Non-coders can now create sophisticated visualizations
  • Iteration speed: Rapid prototyping and refinement cycles
  • Critique and improvement: LLMs effectively identify weaknesses in existing visualizations

What needs development

  • Contextual judgment: Choosing appropriate visualizations for specific audiences
  • Data validation: Automated checking of data quality, freshness, and appropriateness
  • Auditability: Verifying legitimacy of LLM-generated insights
  • Exploration tools: Better support for the “explore” phase, not just “explain”

The human element

The strongest consensus: “This is fundamentally a human problem requiring education, mentorship, and organizational culture change - not just better AI tools.”

As one educator put it: “The hard part is getting people to care that design matters, that information needs to be compressed as you go up the ladder. Getting people to care is not something LLMs will solve. It happens through mentoring, guiding, feedback in a very human one-to-one way.”

Unresolved questions

  • Can we build effective “linters” for data visualizations similar to code linters?
  • How do we balance democratization with maintaining standards?
  • What’s the right division of responsibility between tool creators, organizations, and individuals?
  • How can we better support the exploration phase where problems are still fuzzy?
  • Should we use competing LLMs to validate each other’s outputs?

The session concluded with acknowledgment that while LLMs are powerful accelerators, the fundamental challenges of data literacy, design judgment, and organizational culture remain deeply human problems requiring human solutions.

Image credits: Guru Pratap volunteered to photograph the BoF sessions at The Fifth Elephant Winter edition.

