
Google Cloud Next: Our Top 10 Announcement Highlights

Google Cloud Next is one of the biggest cloud data + AI events of the year, where Google unveils how it is helping businesses, governments and users unlock new value from leading cloud technologies. This year, Google made 161 announcements spanning updates to its products. The focus was firmly on Generative AI, with updates to Vertex AI Model Garden, Imagen, and foundational models, reflecting its potential to revolutionise business operations and dramatically enhance customer experience across a range of industries.

While all of these are bound to have a significant impact on the tools and features that businesses have available for innovation, here are our top 10 highlights. 

 

Image: Thomas Kurian at Google Cloud Next

 

  1. Duet AI in Google Cloud, Looker + Workspace

Google’s conversational AI assistant, Duet AI, is coming to data products like Looker, BigQuery, and Dataplex, as well as Google Workspace. Duet AI’s new code generation and automation features will dramatically boost the day-to-day productivity of data engineers and analysts: contextual recommendations and entire code blocks generated from natural language prompts will significantly cut development time.

For Workspace, Duet AI will help users by automatically writing meeting notes, drafting emails, and building entire Google Slides presentations, cutting hours of repetitive manual work and freeing users to focus on the tasks that matter.

“Duet AI will transform the way we interact with a variety of Google services, improving productivity and effectiveness in several areas, ranging from generating docs and slides to summarising meetings. The new code assistance features will allow engineering teams to accelerate development, which results in value being delivered at a faster rate.

Duet will also power a new generation of analytical services, allowing the use of natural language to gain new and impactful insights.” – Andy Harding, CTO

 

  2. New models in Vertex AI Model Garden

Google has added new models to Vertex AI Model Garden, including Meta’s Llama 2 and Code Llama, and the Technology Innovation Institute’s Falcon LLM, and has pre-announced Anthropic’s Claude 2.

This means Google Cloud now offers a curated collection of first-party, open-source, and third-party models, giving businesses a wide variety of starting points for their own custom use cases, while still supporting fine-tuning, deployment, and more through Google’s powerful Vertex AI platform.
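
Once a Model Garden model has been deployed to a Vertex AI endpoint, it can be queried like any other endpoint. Below is a minimal sketch using the Vertex AI Python SDK; the project, region, endpoint ID, and instance format are placeholders, and the exact request schema depends on the model you deploy.

```python
# Minimal sketch: calling a model deployed from Model Garden to a Vertex AI
# endpoint. Project, region, and endpoint ID are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="europe-west2")

# Endpoint created when deploying a Model Garden model (e.g. Llama 2).
endpoint = aiplatform.Endpoint(
    "projects/my-project/locations/europe-west2/endpoints/1234567890"
)

# The instance schema depends on the deployed model's serving container.
response = endpoint.predict(
    instances=[{"prompt": "Summarise the key benefits of a data clean room."}]
)
print(response.predictions)
```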

 

  3. Imagen feature improvements including digital watermarking using SynthID

The updates to Google’s Imagen image generation model include improved visual appeal, image editing, captioning, visual question answering, and a new tuning feature to align images with brand guidelines, as well as digital watermarking powered by Google DeepMind’s SynthID.


SynthID embeds a digital watermark directly into the pixels of an image, making it imperceptible to the human eye, but detectable for identification. With SynthID, intellectual property and copyright are much easier to manage, making sure that real photographs can be verified, while AI-generated images can be marked as such. This is part of Google’s wider Responsible AI initiative to ensure that AI, including Generative AI, is used ethically and safely.
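
As a rough illustration, here is what image generation with Imagen looks like through the Vertex AI Python SDK; the model version and method names follow the preview SDK and may differ slightly, and the project and prompt are placeholders. SynthID watermarking happens inside the service, so there is no separate client-side step.

```python
# A minimal sketch of generating an image with Imagen on Vertex AI.
import vertexai
from vertexai.preview.vision_models import ImageGenerationModel

vertexai.init(project="my-project", location="us-central1")

model = ImageGenerationModel.from_pretrained("imagegeneration@002")
response = model.generate_images(
    prompt="A minimalist product shot of a reusable coffee cup, studio lighting",
    number_of_images=1,
)

# SynthID watermarking is applied within the Imagen service, not by this code.
response.images[0].save(location="coffee_cup.png")
```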

“SynthID is a milestone in delivering trusted content to customers. We can’t ignore the importance of watermarking AI-generated content when following the recent improvements to Google Cloud’s Imagen models. SynthID will become a crucial tool for the news industry, media, and other content creators.” – Felix Schaumann, Machine Learning Engineering Lead

 

  4. BigQuery Studio

According to recent research, 81% of organisations have increased their data and analytics investments over the past two years. To help teams get more from that investment, Google has introduced BigQuery Studio, a new unified workspace in Google Cloud that lets businesses streamline data workflows without switching between multiple tools.

BigQuery Studio addresses common data challenges by providing an end-to-end analytics experience in a single, purpose-built platform. Its unified workspace allows data engineers, data analysts and data scientists to perform end-to-end tasks such as data ingestion, pipeline creation, and predictive analytics, all in one place and in the coding language of their choice.
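
To give a feel for the workflow, here is the kind of mixed SQL-and-Python step a BigQuery Studio notebook is designed to host, sketched with the standard BigQuery Python client; the project, dataset, and table names are illustrative only.

```python
# Query BigQuery and continue the analysis in Python, as you would in a
# BigQuery Studio notebook. All resource names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

sql = """
    SELECT customer_segment,
           COUNT(*) AS orders,
           AVG(order_value) AS avg_order_value
    FROM `my-project.sales.orders`
    GROUP BY customer_segment
    ORDER BY avg_order_value DESC
"""

# Results arrive as a pandas DataFrame, ready for further analysis or for
# feeding a BigQuery ML model in the same workspace.
df = client.query(sql).to_dataframe()
print(df.head())
```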

 

  5. The arrival of BigQuery data clean rooms

BigQuery data clean rooms are now available in preview, enabling organisations to create and manage secure environments for privacy-centric data sharing, analysis, and collaboration.

BigQuery data clean rooms will offer advanced security and privacy controls to help ensure that teams can conduct meaningful analyses while protecting their data.

“This is a huge step forward for BigQuery + Analytics Hub when it comes to unlocking value from data asset sharing. It will empower business growth and foster collaboration and innovation – whilst enabling enhanced regulatory compliance + privacy out-of-the-box! This is an important milestone towards Analytics Hub’s data monetisation capabilities, which will open lucrative revenue streams and expand market reach for organisations.” – Ash Sultan, Lead Data Architect

 

  6. Access to Vertex AI foundational models via BigQuery

With Vertex AI foundational models such as PaLM now accessible from BigQuery, businesses can use SQL in BigQuery ML to analyse unstructured data for advanced text-processing tasks such as summarisation or sentiment analysis, and retrieve the results in a structured format. These results can then be combined with other data for further analysis.
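
As a hedged sketch of what this looks like in practice, the query below calls ML.GENERATE_TEXT from the BigQuery Python client; the dataset, remote model, and review table are placeholders and assume a remote model over PaLM has already been created through a BigQuery connection.

```python
# Sentiment analysis over unstructured text with BigQuery ML's ML.GENERATE_TEXT.
# All resource names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

sql = """
    SELECT
      review_text,
      ml_generate_text_result AS model_response
    FROM ML.GENERATE_TEXT(
      MODEL `my-project.genai.palm_remote_model`,
      (
        SELECT
          review_text,
          CONCAT('Classify the sentiment of this review as positive, ',
                 'negative or neutral: ', review_text) AS prompt
        FROM `my-project.feedback.product_reviews`
        LIMIT 10
      ),
      STRUCT(0.2 AS temperature, 64 AS max_output_tokens)
    )
"""

for row in client.query(sql).result():
    print(row.review_text, "->", row.model_response)
```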

“BigQuery ML has established itself as the low-code ML service in Google Cloud. Especially with the addition of Foundational Models, BigQuery ML is democratising Generative AI on Google Cloud. For customers using BigQuery as a data warehouse, this enables many new AI use cases with the objective of creating an AI-powered Modern Data Stack.” – Felix Schaumann, Machine Learning Engineering Lead

 

  7. Enhanced support for open source formats within BigLake

BigLake unifies data lakes and warehouses to simplify data storage and break down data silos. To make this accessible to more customers, Google has announced enhanced support within BigLake for open source formats such as Apache Hudi and Delta Lake, and added performance acceleration for Apache Iceberg.
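
For example, an existing Apache Iceberg table can be registered as a BigLake external table with a single DDL statement; in the sketch below, run via the BigQuery Python client, the dataset, connection, and metadata path are placeholders.

```python
# Register an Apache Iceberg table as a BigLake external table.
# All resource names and paths are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

ddl = """
    CREATE EXTERNAL TABLE `my-project.lakehouse.events_iceberg`
    WITH CONNECTION `my-project.europe-west2.lake_connection`
    OPTIONS (
      format = 'ICEBERG',
      uris = ['gs://my-data-lake/events/metadata/v3.metadata.json']
    )
"""

# Once created, the table can be queried like any other BigQuery table.
client.query(ddl).result()
```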

In the past six months, customer use of BigLake to combine data lake and warehouse workloads across clouds has grown 27x, and this increased support will enable even more businesses to benefit from having cleaner, more usable data all in one place. 

“Google’s commitment to open source is well-known. It has made significant technology available to the community through projects such as Kubernetes, TensorFlow and Go.

Through BigLake (and BigQuery Omni), Google is strengthening its commitment to open source and multi-cloud from the perspective of data warehousing. Through BigLake, you can now analyse more open source formats, allowing you to build open lakehouse architectures. With such a fantastic unified interface, BigLake continues to revolutionise how we build data platforms for analytics and insights.” – Andy Harding, CTO

 

  8. Brand new Jump Start Solutions

Jump Start Solutions are now generally available. These pre-built application and infrastructure samples cover a variety of use cases and can be deployed directly from the Google Cloud console, then customised into an impactful solution.

Examples already available include deploying a data warehouse using BigQuery, an AI/ML image-processing pipeline, and an ML model that summarises large documents. These solutions make it much faster and easier for businesses to get started once they have identified their next use case.

 

  9. Google Maps has new APIs for solar, air quality and pollen

The Google Maps Platform team introduced a suite of Environment APIs for solar, air quality, and pollen, expanding what developers can do with Google Maps Platform to pave the way for a sustainable future. 

These APIs help developers better understand our rapidly changing world so they can create new sustainability tools, share actionable insights, and help people adapt to environmental change through better planning. To find out more about how technology is being used to combat climate change, read our recent blog.

“These new updates from the Google Maps team allow analysts to leverage environmental data to drive decision-making. The Solar API gives solar installers and energy companies the ability to easily understand how much solar potential a roof has, helping identify sites that are most suitable for solar installation. The Air Quality API provides heat maps as well as details about the different types of pollutants, giving councils and planners the ability to monitor air quality in near-real-time and make positive changes that impact the health of local residents.” – Stan Hill, Senior Cloud Architect and Sustainability Lead
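
To illustrate the kind of lookup described above, the snippet below requests building insights from the Solar API for a single rooftop; the API key and coordinates are placeholders, and field names follow the public Solar API reference.

```python
# Query the Solar API for building insights at a given coordinate.
# API key and coordinates are placeholders.
import requests

API_KEY = "YOUR_MAPS_PLATFORM_API_KEY"

resp = requests.get(
    "https://solar.googleapis.com/v1/buildingInsights:findClosest",
    params={
        "location.latitude": 51.5072,
        "location.longitude": -0.1276,
        "key": API_KEY,
    },
    timeout=30,
)
resp.raise_for_status()
insights = resp.json()

# solarPotential includes roof area, sunshine hours, and panel configurations.
print(insights["solarPotential"]["maxArrayPanelsCount"])
```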

 

  10. A new generation of Vertex AI Feature Store

A new generation of Vertex AI Feature Store, now built on BigQuery, is here to help businesses avoid data duplication and preserve data access policies. 

The new Feature Store natively supports vector embeddings, allowing teams to simplify the infrastructure required to store, manage and retrieve unstructured data in real-time.

Native BigQuery operations are also fast and intuitive, not only simplifying MLOps pipelines but also making it easy for data scientists to experiment. Businesses can leverage custom vector embeddings for many applications, making operations simpler and more efficient.
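
Because the new Feature Store is built on BigQuery, features, including vector embeddings, live in ordinary BigQuery tables. The sketch below shows only that BigQuery-side layout with placeholder names; registering the table as a feature source and configuring online serving are done through Vertex AI and are not shown.

```python
# A feature table with an ARRAY<FLOAT64> embedding column, the kind of BigQuery
# source the new Vertex AI Feature Store serves from. Names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

ddl = """
    CREATE TABLE IF NOT EXISTS `my-project.features.product_features` (
      product_id STRING NOT NULL,            -- entity ID
      feature_timestamp TIMESTAMP NOT NULL,  -- feature freshness
      price_bucket STRING,
      description_embedding ARRAY<FLOAT64>   -- vector embedding for retrieval
    )
"""

client.query(ddl).result()
```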

“A low-latency Feature Store is crucial when implementing streaming or near real-time AI + Machine Learning use cases. With the new generation of Feature Store in Vertex AI, BigQuery has become the go-to tool for storing, managing, and sharing features in your organisation. A cutting-edge MLOps platform on Google Cloud should natively use BigQuery for governing features from now on.” – Felix Schaumann, Machine Learning Engineering Lead

 

What’s next?

Google is leading the way in data + AI, and the latest announcements show that it will continue to enable businesses to innovate, become more efficient, and drive new business value. We look forward to seeing how these latest updates to Generative AI and other tooling will help businesses improve experiences, drive revenue, and boost efficiency.

Register for our upcoming Generative AI webinar to find out more. 

Datatonic is Google Cloud’s Telecommunications Partner of the Year, with a wealth of experience developing and deploying impactful data platforms, Machine Learning models, and MLOps platforms. Need help getting started or want to discuss your challenges? Get in touch today!
