The AI everything show continues at AWS: Generate SQL from text, vector search, and more

Invisible watermarks on AI-generated images? Sure. But major tools in the stack matter most

re:Invent Another day at AWS re:Invent, and yet more talk of artificial intelligence dominated, with a senior executive taking to the stage to wax lyrical about the impact of vector databases on the tech, and more.

Dr Swami Sivasubramanian, AWS VP of Data and AI, gave the official AI keynote at re:Invent in Las Vegas, a day after AWS CEO Adam Selipsky also spoke mostly about AI.

Dr. Swami Sivasubramanian, VP of Data and AI, described the AWS Generative AI Stack

Sivasubramanian gave the database perspective, telling attendees that high quality AI results depend on high quality data, and showing off features including the ability to generate SQL from text input for Amazon Redshift (a data warehouse service), and the addition of vector search to database managers including OpenSearch Serverless (generally available), MemoryDB for Redis (preview), and DocumentDB (generally available), with Amazon Aurora and MongoDB coming soon. Vector search is also available for PostgreSQL via the pgvector extension.
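
For anyone who has not seen vector search surface inside a relational database, here is a minimal sketch using PostgreSQL's pgvector extension from Python. The table, connection string, and embedding values are invented for illustration; real embeddings run to hundreds or thousands of dimensions.

```python
# Minimal sketch of vector search via PostgreSQL's pgvector extension.
# The table, connection string, and embedding values are illustrative only.
import psycopg2

conn = psycopg2.connect("dbname=demo user=demo")  # hypothetical database
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
cur.execute("""
    CREATE TABLE IF NOT EXISTS docs (
        id        serial PRIMARY KEY,
        body      text,
        embedding vector(3)  -- real embeddings have far more dimensions
    )
""")
cur.execute(
    "INSERT INTO docs (body, embedding) VALUES (%s, %s::vector)",
    ("a cat sat on the mat", "[0.12, 0.30, 0.88]"),
)
conn.commit()

# Nearest-neighbour lookup: <-> is pgvector's Euclidean distance operator.
query_embedding = "[0.10, 0.28, 0.90]"
cur.execute(
    "SELECT id, body FROM docs ORDER BY embedding <-> %s::vector LIMIT 5",
    (query_embedding,),
)
print(cur.fetchall())
```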

So why all the activity around vector search? "Vector embeddings are produced by foundational models, which translate text inputs like words, phrases, or large units of text into numerical representations," said Sivasubramanian. "Vectors allow your models to more easily find the relationship between similar words, for instance, a cat is closer to a kitten, or a dog is closer to a pup."
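
To make that "closer to" idea concrete, here is a toy calculation of cosine similarity, a measure commonly used to compare embedding vectors; the three-dimensional vectors are invented purely for illustration.

```python
# Toy illustration of why "a cat is closer to a kitten" in embedding space.
# The three-dimensional vectors are invented; real embeddings are far larger.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

cat    = [0.9, 0.1, 0.3]
kitten = [0.8, 0.2, 0.35]
truck  = [0.1, 0.9, 0.7]

print(cosine_similarity(cat, kitten))  # close to 1.0: similar concepts
print(cosine_similarity(cat, truck))   # noticeably lower: unrelated concepts
```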

In other words, adding vector search to a database manager improves its suitability for generative AI. Sivasubramanian said AWS is working to "add vector capabilities across our portfolio," so we should expect more of this.

Sivasubramanian also added a database bent to Amazon Q, the AI-driven assistant presented the day before by Selipsky. He previewed a new feature for the Redshift query editor called Amazon Q generative SQL. The user explains what results they want and Amazon Q generates the SQL. The example given was rather basic though, the kind of thing a DBA (database administrator) or developer could likely write for themselves with little effort. There may be a pattern here, that AI will help more with drudgery than with advanced work; but it is early days.

Responsible AI

Sivasubramanian previewed Titan Image Generator, a new member of the Titan family of models, which is exclusive to AWS's Bedrock service. He began with a prompt for an image of an iguana, then modified it by asking for a rainforest background. “You can use the model seamlessly [to] swap out backgrounds to generate lifestyle images, all while retaining the main subject of the image,” he said.
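
For developers rather than keynote audiences, Titan Image Generator is reached through Bedrock's runtime API. The sketch below shows roughly what that looks like from Python with boto3; the model ID, task type, and payload shape are our assumptions based on Bedrock's conventions, not details shown on stage.

```python
# Rough sketch of calling Titan Image Generator through Amazon Bedrock.
# The model ID, task type, and payload shape are assumptions for illustration,
# not details taken from the keynote.
import base64
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "taskType": "TEXT_IMAGE",  # assumed task type for plain text-to-image
    "textToImageParams": {"text": "an iguana in a rainforest"},
    "imageGenerationConfig": {"numberOfImages": 1, "width": 512, "height": 512},
})

response = bedrock.invoke_model(
    modelId="amazon.titan-image-generator-v1",  # assumed model identifier
    body=body,
)
payload = json.loads(response["body"].read())

# The response is expected to carry base64-encoded image data.
with open("iguana.png", "wb") as f:
    f.write(base64.b64decode(payload["images"][0]))
```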

There are some obvious use cases for this, for example on ecommerce sites where users can view products in a personalized context; we later saw an example of a woman refitting her home. It all felt sanitized though, and it is easy to think of cases where AI-generated images could be used in a deceptive manner.

AWS was one of several companies to meet with the White House earlier this year to discuss responsible AI and made a number of voluntary commitments. One disclosed by Sivasubramanian is that to “promote the responsible development of AI technology, all Titan generated images come with an invisible watermark. These watermarks are designed to help reduce the spread of misinformation by providing a discrete mechanism to identify AI generated images.” These watermarks are “designed to be resistant to alterations,” the press release states.

Another feature, called Guardrails for Amazon Bedrock, "helps customers implement safeguards customized to their generative AI applications and aligned with their responsible AI policies."

The snag is that developers who do not care about Guardrails will not implement the safeguards, and watermarks are unlikely to prove a robust solution either.

AWS has taken the position that AI-driven applications will become the norm in many areas. A slide shown frequently here, with variations, depicts what AWS calls the Generative AI stack. At the bottom is the infrastructure: GPUs, Trainium and Inferentia specialist chips, Nitro accelerated networking, and so on. Sivasubramanian also puts SageMaker, an online IDE for building custom models or deploying pre-trained ones, into this category.

Next come the tools, and in particular Bedrock, a managed service which offers a choice of foundation models (now including Claude 2.1, the latest from AWS's close AI partner Anthropic). Bedrock also supports Retrieval Augmented Generation (RAG), which enables the model to draw on contextual data, and further features called fine-tuning and continued pre-training, which keep the model up to date and adapt it to a specific industry or organization.
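
As a rough illustration of the RAG pattern: retrieve relevant text, splice it into the prompt, and call the model. The retrieval function below is a stand-in, and the Claude model ID and Anthropic-style payload are assumptions for the sketch rather than a definitive recipe.

```python
# Bare-bones sketch of the RAG pattern against a Bedrock-hosted model.
# The retrieval function is a stand-in, and the Claude model ID and
# Anthropic-style payload are assumptions for illustration.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def retrieve_context(question: str) -> str:
    # Stand-in for a real vector-search lookup (OpenSearch, pgvector, etc.).
    return "Internal policy doc: refunds are processed within 14 days."

def answer(question: str) -> str:
    context = retrieve_context(question)
    prompt = (
        "\n\nHuman: Use only the context below to answer the question.\n"
        f"Context: {context}\n"
        f"Question: {question}\n\nAssistant:"
    )
    body = json.dumps({"prompt": prompt, "max_tokens_to_sample": 300})
    response = bedrock.invoke_model(
        modelId="anthropic.claude-v2:1",  # assumed ID for Claude 2.1 on Bedrock
        body=body,
    )
    return json.loads(response["body"].read())["completion"]

print(answer("How long do refunds take?"))
```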

At the top of the stack are tools like Amazon Q and CodeWhisperer. These do not require the user to know about AI but give AI-driven assistance.

Although Amazon Q is perhaps the most prominent launch at re:Invent, it is the tools and infrastructure parts of this stack that matter more. ®

Also at re:Invent

Amazon, as usual, has announced a truckload of stuff for its annual cloud conference, held this year in Las Vegas. We covered its Q chat assistant here; AWS's suggestion of a cell-based architecture here; CodeWhisperer updates here; the launch of its WorkSpaces Thin Client here; improvements to S3 here; SDKs for Rust and Kotlin right here; the custom-designed Graviton4 and Trainium2 processors here; and its latest direction with AppFabric here along with a summary of other news.

In the meantime, here are some other bits and pieces you might want to know about:

  • SageMaker Studio, mentioned above, has had some updates including a new web-based interface and a Code Editor based on Microsoft's open-source Visual Studio Code.
  • Security tool Amazon Inspector has gained three more workload-scanning capabilities, including the ability to monitor EC2 instances without installing extra code.
  • We're told vector engine for Amazon OpenSearch Serverless is now generally available as is vector search for Amazon DocumentDB (with MongoDB compatibility). And vector search for Amazon MemoryDB for Redis is now being teased in preview.
  • Support for Apache Iceberg tables is now generally available in Amazon Redshift.

You can find Amazon's roundup of its announcements here, and a big ol' list here.
