Generative AI has become a key driver of change in every IT organization today. At AMD, we have successfully deployed production AI applications with executive support from our CIO, our business leaders, and our engineering leadership. In this talk, we will describe how we adopted GenAI into our AI stack. We will share our experience in selecting the right hardware, software, and partners, skilling our teams for GenAI, and defining success metrics.
AI has already transformed our lives through Machine Learning. Classic ML remains critical infrastructure, delivering reliable, stable solutions whose predictive power exceeds human-level performance in many areas.
Generative AI, specifically transformer-based LLMs, has exponentially accelerated our ability to apply artificial intelligence to problems that are conversational, exploratory, or creative, or that assist humans with fact-based reasoning. The key change is that talent of all kinds, not just technologists and scientists, can now access AI in simple ways. This is a shift that can only get better with our collective efforts.
There is worldwide energy and innovation focused not only on mastering these new AI techniques, but also on making them accessible to people who do not have bleeding-edge resources.
How did we move our AI needle?
- Recognition of the shift - all levels of organization
- Executive vision and accelerated support for initiatives
- Framework for evaluating AI projects (ML/AI decision making process)
- Partner selection, skilling, and training
- Metrics for success
- A working GenAI stack based on SaaS and responsible-AI compliance
- Building your private GenAI stack for specific internal use cases
- Deployments of LLMs, vector databases, API gateways, and a fine-tuning/RAG toolset
- Patterns for successful use cases
- Handling challenging use cases
- How we can help you
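To make the retrieval-augmented generation (RAG) item above concrete, here is a minimal, illustrative sketch of the pattern: retrieve the most relevant documents for a query, then ground the LLM prompt in them. The bag-of-words "embeddings" and in-memory search below are toy stand-ins for a real embedding model and vector database; all function names and the example documents are hypothetical.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: bag-of-words term counts (stand-in for a real model)."""
    return Counter(t.strip(".,?!") for t in text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by similarity to the query; return the top k.
    A vector database performs this step at scale."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Ground the LLM call in retrieved context (the 'augmented' in RAG)."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Fine-tuning adapts a base model to domain data.",
    "A vector database stores embeddings for similarity search.",
    "API gateways route and throttle requests to model endpoints.",
]
print(build_prompt("How do we search embeddings?", docs))
```

In a production stack, `embed` would call an embedding model, `retrieve` would query the vector database, and the returned prompt would be sent through the API gateway to the LLM.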