Amazon Web Services (AWS) has announced a $100 million investment to establish the AWS Generative AI Innovation Center. The initiative prioritises healthcare and life sciences, along with other industries such as financial services, energy and utilities, and telecommunications.
The AWS Generative AI Innovation Center will serve as a platform connecting AWS AI and machine learning (ML) experts with customers worldwide. Its primary objective is to assist customers in envisioning, designing, and launching innovative generative AI products, services, and processes.
Comprising a team of strategists, data scientists, engineers, and solutions architects, the AWS Generative AI Innovation Center will collaborate closely with customers to develop tailored solutions that harness the potential of generative AI. For instance, healthcare and life sciences companies can explore ways to accelerate drug research and discovery, manufacturers can reinvent industrial design and processes, and financial services companies can offer customers more personalised information and advice.
To facilitate this collaboration, AWS will provide customers with workshops, engagements, and training at no cost. These initiatives will help customers identify and define the use cases that deliver the greatest value to their businesses, drawing on best practices and industry expertise. Customers will work hand in hand with AWS generative AI experts and the AWS Partner Network to select appropriate models, overcome technical and business challenges, develop proofs of concept, and establish plans for scaling solutions.
The Generative AI Innovation Center team will also offer guidance on responsible generative AI practices and on optimising machine learning operations to reduce costs. Engagements will provide customers with strategies, tools, and support for using AWS generative AI services. These services include Amazon CodeWhisperer, an AI-powered coding companion, and Amazon Bedrock, a fully managed service that provides access to foundation models (FMs) from AI21 Labs, Anthropic, and Stability AI, as well as Amazon's own family of FMs (Amazon Titan), via an API.
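As a rough illustration of the Bedrock API mentioned above, the sketch below builds a request body for an Anthropic model and shows (commented out, since it requires AWS credentials and Bedrock access) how it would be sent with the `invoke_model` call. The specific model ID and payload schema here are assumptions for illustration, not values taken from the announcement.

```python
import json

# Hypothetical helper: serialise a prompt into the JSON body that an
# Anthropic Claude model on Amazon Bedrock would expect. The field names
# and prompt format are assumptions for this sketch.
def build_claude_request(prompt: str, max_tokens: int = 256) -> str:
    return json.dumps({
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
    })

# Sending the request requires an AWS account with Bedrock access:
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.invoke_model(
#     modelId="anthropic.claude-v2",  # assumed model ID
#     body=build_claude_request("Summarise this drug-trial abstract."),
# )
# print(json.loads(response["body"].read())["completion"])
```

The point of the helper is that Bedrock exposes each provider's model behind a single API call, with the provider-specific details confined to the request body.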
Customers can train and run their models on high-performance infrastructure, such as AWS Inferentia-powered Amazon EC2 Inf1 instances, AWS Trainium-powered Amazon EC2 Trn1 instances, and Amazon EC2 P5 instances powered by NVIDIA H100 Tensor Core GPUs. Moreover, customers have the option to build, train, and deploy their own models using Amazon SageMaker, or to use Amazon SageMaker JumpStart to deploy popular FMs such as Cohere's large language models, Technology Innovation Institute's Falcon 40B, and Hugging Face's BLOOM.
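To make the SageMaker JumpStart workflow above concrete, the sketch below builds the kind of inference payload commonly used with Hugging Face text-generation models and shows (commented out, since deployment requires an AWS account and incurs costs) how a JumpStart model would be deployed and invoked. The model ID, instance type, and parameter names are assumptions for illustration.

```python
# Hypothetical helper: construct a text-generation request payload in the
# shape commonly used by Hugging Face text-generation endpoints. The
# parameter names are assumptions for this sketch.
def build_text_generation_payload(prompt: str,
                                  max_new_tokens: int = 128,
                                  temperature: float = 0.7) -> dict:
    return {
        "inputs": prompt,
        "parameters": {
            "max_new_tokens": max_new_tokens,
            "temperature": temperature,
        },
    }

# Deploying and querying a JumpStart FM (requires AWS credentials;
# the model ID and instance type below are assumed, not verified):
# from sagemaker.jumpstart.model import JumpStartModel
# model = JumpStartModel(model_id="huggingface-llm-falcon-40b-bf16")
# predictor = model.deploy(initial_instance_count=1,
#                          instance_type="ml.g5.12xlarge")
# print(predictor.predict(
#     build_text_generation_payload("Draft a personalised savings plan:")))
```

In this workflow, JumpStart handles fetching the model artefacts and provisioning the endpoint, so the customer's code is limited to choosing a model and shaping the request.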