Since its debut at the last AWS re:Invent, Amazon Q has evolved significantly, introducing new features that extend beyond automating content search, creation, and task management for knowledge workers, developers, and call center managers. In our article, Can Amazon Q Be to Business Users and Developers What Q Is to Bond?, we explored how Amazon Q integrates with AWS platforms such as QuickSight and Amazon Connect to enhance data-driven insights and customer support.
At AWS re:Invent 2024, AWS CEO Matt Garman, along with the leadership team, showcased the expanded capabilities of Amazon Q, with a particular emphasis on Amazon Q Developer, which became generally available in April 2024. This module supports the entire software development life cycle—spanning coding, building, and operations—and has already demonstrated success across multiple use cases:
- Datapel, a warehouse management system provider, achieved a 70% improvement in efficiency using Amazon Q Developer. The tool accelerated feature deployment, streamlined task execution, and minimized repetitive processes.
- FINRA, a self-regulatory organization, reported a 20% improvement in code quality and integrity. This enhancement resulted in the development of more secure and high-performing software.
Amazon Q Developer's capabilities extend to automating Java version upgrades, enabling faster and more efficient transitions from older Java versions to newer ones. It also accelerates complex migration tasks, such as porting .NET applications from Windows to Linux and modernizing VMware workloads into cloud-native architectures, streamlining transformation work and reducing operational costs by 40%.
A notable success story involved Signature IT, a European leader in digital transactions. During early beta testing of Amazon Q Developer, the company modernized legacy .NET applications from Windows to Linux in just a few days—a process initially estimated to take 6-8 months.
Additionally, the platform prioritizes reducing mainframe migration timelines from years to mere quarters. According to the Avasant Application Modernization Services 2024 Market Insights™, nearly 80% of challenges in implementing modernization services stem from outdated legacy infrastructure. The shrinking pool of experienced talent and inadequate documentation and maintenance of legacy code further underscore the critical need for modernization solutions.
In fact, in the past 12 to 18 months, enterprise customers have shown great interest in leveraging generative AI (Gen AI) for multiple use cases in solution development across the application life cycle. Businesses are modernizing applications and enhancing legacy systems by integrating Gen AI-driven tools and leveraging cloud-native services to improve efficiency, scalability, and security.
Figure 1: Enterprise examples of various Gen AI-specific use cases across the application life cycle
Typical use cases include automating code analysis, documenting outdated codebases, creating multiple scenarios to validate application performance, and automating code refactoring to transform monolithic applications into agile, scalable microservices. Figure 1 above showcases enterprise examples across these use cases.
Debuting the Nova Suite of Foundation Models
In our article published in February 2024, AWS Bets on Model Flexibility and Hardware Efficiency for a Distinctive Generative AI Advantage, we highlighted AWS’s distinctive approach to empowering enterprises with flexibility in foundation model selection. Unlike Microsoft and Google, which have doubled down on their proprietary and select open-source models, AWS has carved a unique niche by offering a model-agnostic philosophy through its Amazon Bedrock platform.
At AWS re:Invent 2024, Andy Jassy, president and CEO of Amazon, announced a series of enhancements to the Bedrock platform, including Guardrails to block harmful content and standardize safety across Gen AI applications, multiagent collaboration to create and coordinate dedicated agents for executing complex workflows, Model Distillation to provide enterprises with smaller, faster, and more cost-effective models, and automated reasoning checks to prevent factual errors from LLM hallucinations. The following are a few success stories:
- Moody’s, a global provider of credit ratings and financial insights, has reported enhanced accuracy and quicker risk analysis by utilizing the multiagent capabilities of Amazon Bedrock. It uses the platform to create dedicated tasks for each agent and facilitates multiagent collaboration to synthesize the outputs into insights.
- Robin AI, a legal AI assistant company, has enhanced its interpretation accuracy across contract clauses and legal Q&A capabilities by leveraging the model distillation feature of Amazon Bedrock. This resulted in faster response times, making complex legal processes quicker, cost-effective, and more accessible.
Interestingly, AWS has launched its own family of proprietary foundation models, Amazon Nova, a suite of four models (Micro, Lite, Pro, and Premier) that accept text, image, or video inputs and generate text output. These models handle a wide range of tasks across more than 200 languages, from text generation to multimodal content understanding, offering organizations the flexibility to address diverse use cases. The Amazon Nova models are at least 75% more cost-effective than other models in their categories on Amazon Bedrock while also being among the fastest. Integration with Bedrock Knowledge Bases enhances output reliability and relevance, equipping enterprises with accurate, data-driven decision-making tools. Additionally, AWS introduced Amazon Nova Canvas and Amazon Nova Reel to extend Amazon's capabilities into image and video generation, respectively, for applications such as advertising and marketing. Amazon Nova Canvas creates images from text or image prompts, while Amazon Nova Reel generates video from text and images. AWS also plans to launch an Amazon Nova speech-to-speech model and an Amazon Nova any-to-any multimodal model in 2025.
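To make this concrete, the following is a minimal Python sketch of how an enterprise might call an Amazon Nova model through Amazon Bedrock's Converse API. The model ID and prompt are illustrative assumptions, and the actual network call (which requires boto3 and AWS credentials) is shown only in comments; the sketch focuses on assembling the request.

```python
# Hypothetical sketch: preparing a request for an Amazon Nova model via
# the Amazon Bedrock Converse API. Model ID and prompt are illustrative;
# a live call requires boto3 and configured AWS credentials.

def build_converse_request(model_id: str, prompt: str,
                           max_tokens: int = 512) -> dict:
    """Assemble keyword arguments for bedrock-runtime's converse() call."""
    return {
        "modelId": model_id,
        "messages": [
            # Converse API messages carry a role and a list of content blocks.
            {"role": "user", "content": [{"text": prompt}]},
        ],
        "inferenceConfig": {"maxTokens": max_tokens, "temperature": 0.3},
    }

request = build_converse_request(
    "amazon.nova-lite-v1:0",  # illustrative Nova model ID
    "Summarize the key risks in this quarter's support tickets.",
)

# With credentials configured, the request would be sent like this:
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# response = client.converse(**request)
# print(response["output"]["message"]["content"][0]["text"])
```

Because the Converse API uses a uniform request shape across Bedrock models, swapping Nova Lite for Nova Pro (or a third-party model) is largely a matter of changing the `modelId`, which is the model-agnostic flexibility discussed above.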
Across sectors, 80% of enterprises are exploring multimodal AI use cases, a massive opportunity to enhance productivity and gain a competitive edge. In our recently launched Generative AI Platforms Q2 2024 RadarView™, we showcased how customers are exploring multimodal AI use cases across various industries, as shown in Figure 2. From intelligent health assistants, spatial data analytics, and clinical trial monitoring to fraud detection, multimodal capabilities are enhancing customer experiences with hyper-personalized recommendations.
Figure 2: Key multimodal AI use cases across various sectors
As customers explore various multimodal models, it is crucial to keep several key considerations in mind, such as ROI evaluation, data privacy and protection, incorporating human-in-the-loop processes, and improving accuracy. These aspects are discussed in detail in our article, Harnessing Multimodal AI: Innovations and Applications. By addressing these factors, organizations can maximize the potential of multimodal AI while ensuring robust and responsible implementation.
By Gaurav Dewan, Research Director, and Dhanusha Ramakrishnan, Lead Analyst, Avasant