♊️ GemiNews 🗞️ (dev)


🗞️Vertex Search and Conversation

🗿Semantically Similar Articles (by :title_embedding)

Vertex Search and Conversation

2024-04-08 - Anita Gutta (from Google Cloud - Medium)


[Blogs] 🌎 https://medium.com/google-cloud/vertex-search-and-conversation-364cdc591167?source=rss----e52cf94d98af---4 [🧠] [v2] article_embedding_description: {:llm_project_id=>"Unavailable", :llm_dimensions=>nil, :article_size=>11121, :llm_embeddings_model_name=>"textembedding-gecko"}
[🧠] [v1/3] title_embedding_description: {:ricc_notes=>"[embed-v3] Fixed on 9oct24. Only seems incompatible at first glance with embed v1.", :llm_project_id=>"unavailable possibly not using Vertex", :llm_dimensions=>nil, :article_size=>11121, :poly_field=>"title", :llm_embeddings_model_name=>"textembedding-gecko"}
[🧠] [v1/3] summary_embedding_description:
[🧠] As per bug https://github.com/palladius/gemini-news-crawler/issues/4 we can state this article's title/summary embeddings belong to version: v3 (very few articles updated on 9oct24)

🗿article.to_s

------------------------------
Title: Vertex Search and Conversation

Author: Anita Gutta
PublishedDate: 2024-04-08
Category: Blogs
NewsPaper: Google Cloud - Medium
Tags: google-cloud-platform, gcp-chat-bots, conversational-ai, machine-learning, vertex-search
{"id"=>9341,
"title"=>"Vertex Search and Conversation",
"summary"=>nil,
"content"=>"

Vertex AI Search is a fully managed platform, powered by large language models, that lets you build AI-enabled search and recommendation experiences for your public or private websites or mobile applications. For details on creating, deploying, maintaining, and monitoring a Vertex AI Search instance, refer to the Vertex AI Search and Conversation documentation.

There are many options for building a search/chat bot yourself … but demos are easy, production is hard.

\"\"

The following are the steps involved in constructing a Retrieval-Augmented Generation (RAG) search application:

[Diagram: steps to build a RAG search application]

Using Google Cloud's managed Vertex AI Search considerably streamlines this process.

Follow these simple steps to build your datastores and engines, then load and query the data in the GCP Vertex AI Search and Conversation platform.

What we are going to Build

  1. Datastores and Engines from Terraform
  2. Load data and query it from Python

Step 1: Create the datastore and search engine with Terraform

# Create Test DataStore
resource "google_discovery_engine_data_store" "test-ds" {
  location                    = "global"
  data_store_id               = "test-data-store-id"
  display_name                = "test-unstructured-datastore"
  industry_vertical           = "GENERIC"
  content_config              = "CONTENT_REQUIRED"
  solution_types              = ["SOLUTION_TYPE_SEARCH"]
  create_advanced_site_search = false
  project                     = <PROJECT ID>
}

# Create Test Search Engine
resource "google_discovery_engine_search_engine" "test-engine" {
  engine_id         = "test_engine_id"
  collection_id     = "default_collection"
  location          = google_discovery_engine_data_store.test-ds.location
  display_name      = "test-engine"
  industry_vertical = "GENERIC"
  data_store_ids    = [google_discovery_engine_data_store.test-ds.data_store_id]
  common_config {
    company_name = "Test Company"
  }
  search_engine_config {
    search_add_ons = ["SEARCH_ADD_ON_LLM"]
  }
  project = <PROJECT ID>
}

# Output resource IDs
output "test_data_store_id" {
  value = google_discovery_engine_data_store.test-ds.data_store_id
}
output "test_engine_id" {
  value = google_discovery_engine_search_engine.test-engine.engine_id
}

Run

Plug in your project ID in the Terraform code above and run the commands below:

terraform init
terraform plan
terraform apply

If you are running Terraform with your own GCP identity, you will run into an error like the one below.

Error: Error creating DataStore: googleapi: Error 403: 
Your application is authenticating by using local Application Default
Credentials. The discoveryengine.googleapis.com API requires a quota project,
which is not set by default. To learn how to set your quota project,
see https://cloud.google.com/docs/authentication/adc-troubleshooting/user-creds .

Add a provider block to specify the billing project. The billing project and the project where the datastores are created can be the same.

provider "google" {
project = <PROJECT ID> // Your actual GCP project ID
user_project_override = true
billing_project = <BILLING PROJECT ID> // The project used for quota
}

Once Terraform succeeds, you can log in to the GCP Console -> Vertex AI Search and Conversation and:

  1. Confirm an app named “test-engine” was created
  2. Confirm an empty datastore named “test-unstructured-datastore” was created (you can also verify this from code, as sketched below)
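
As a minimal programmatic check, the sketch below lists the datastores under the default collection. It assumes the v1alpha Python client's DataStoreServiceClient and list_data_stores; verify those names against the version of google-cloud-discoveryengine you have installed.

from google.cloud import discoveryengine_v1alpha as discoveryengine

# List the datastores under the default collection and look for ours.
client = discoveryengine.DataStoreServiceClient()
parent = "projects/<PROJECT ID>/locations/global/collections/default_collection"
for data_store in client.list_data_stores(parent=parent):
    print(data_store.name, data_store.display_name)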

Step 2: Load and query the data with Python

Create load_data.py with the contents below. Plug in the project ID and datastore ID (output from Terraform; terraform output test_data_store_id prints the datastore ID).


from google.api_core.client_options import ClientOptions
from google.cloud import discoveryengine_v1alpha as discoveryengine


def import_documents(
    project_id: str,
    location: str,
    data_store_id: str,
    gcs_uri: str,
):
    # Create a client. Non-global locations need a regional API endpoint.
    client_options = (
        ClientOptions(api_endpoint=f"{location}-discoveryengine.googleapis.com")
        if location != "global"
        else None
    )
    client = discoveryengine.DocumentServiceClient(client_options=client_options)

    # The full resource name of the search engine branch, e.g.
    # projects/{project}/locations/{location}/dataStores/{data_store_id}/branches/{branch}
    parent = client.branch_path(
        project=project_id,
        location=location,
        data_store=data_store_id,
        branch="default_branch",
    )

    source_documents = [f"{gcs_uri}/*"]

    request = discoveryengine.ImportDocumentsRequest(
        parent=parent,
        gcs_source=discoveryengine.GcsSource(
            input_uris=source_documents, data_schema="content"
        ),
        # Options: `FULL`, `INCREMENTAL`
        reconciliation_mode=discoveryengine.ImportDocumentsRequest.ReconciliationMode.INCREMENTAL,
    )

    # Make the request; this starts a long-running operation.
    operation = client.import_documents(request=request)

    # Block until the import completes.
    response = operation.result()

    # Once the operation is complete, details such as success and failure
    # counts are available in the operation metadata.
    metadata = discoveryengine.ImportDocumentsMetadata(operation.metadata)

    return operation.operation.name


source_documents_gs_uri = (
    "gs://cloud-samples-data/gen-app-builder/search/alphabet-investor-pdfs"
)

PROJECT_ID = <PROJECT ID>
DATASTORE_ID = <DATASTORE ID>
LOCATION = "global"

print("Starting to load data into the datastore")
import_documents(PROJECT_ID, LOCATION, DATASTORE_ID, source_documents_gs_uri)
print("Completed loading data into the datastore")

Create search_data.py with the contents below. Plug in the project ID and datastore ID (output from Terraform).

from google.api_core.client_options import ClientOptions
from google.cloud import discoveryengine_v1alpha as discoveryengine
from typing import List


def search_sample(
    project_id: str,
    location: str,
    data_store_id: str,
    search_query: str,
) -> List[discoveryengine.SearchResponse]:
    # For more information, refer to:
    # https://cloud.google.com/generative-ai-app-builder/docs/locations#specify_a_multi-region_for_your_data_store
    client_options = (
        ClientOptions(api_endpoint=f"{location}-discoveryengine.googleapis.com")
        if location != "global"
        else None
    )

    # Create a client
    client = discoveryengine.SearchServiceClient(client_options=client_options)

    # The full resource name of the search engine serving config, e.g.
    # projects/{project_id}/locations/{location}/dataStores/{data_store_id}/servingConfigs/{serving_config_id}
    serving_config = client.serving_config_path(
        project=project_id,
        location=location,
        data_store=data_store_id,
        serving_config="default_config",
    )

    # Optional: configuration options for search.
    # Refer to the `ContentSearchSpec` reference for all supported fields:
    # https://cloud.google.com/python/docs/reference/discoveryengine/latest/google.cloud.discoveryengine_v1.types.SearchRequest.ContentSearchSpec
    content_search_spec = discoveryengine.SearchRequest.ContentSearchSpec(
        # For information about snippets, refer to:
        # https://cloud.google.com/generative-ai-app-builder/docs/snippets
        snippet_spec=discoveryengine.SearchRequest.ContentSearchSpec.SnippetSpec(
            return_snippet=True
        ),
        # For information about search summaries, refer to:
        # https://cloud.google.com/generative-ai-app-builder/docs/get-search-summaries
        summary_spec=discoveryengine.SearchRequest.ContentSearchSpec.SummarySpec(
            summary_result_count=5,
            include_citations=True,
            ignore_adversarial_query=True,
            ignore_non_summary_seeking_query=True,
        ),
    )

    # Refer to the `SearchRequest` reference for all supported fields:
    # https://cloud.google.com/python/docs/reference/discoveryengine/latest/google.cloud.discoveryengine_v1.types.SearchRequest
    request = discoveryengine.SearchRequest(
        serving_config=serving_config,
        query=search_query,
        page_size=10,
        content_search_spec=content_search_spec,
        query_expansion_spec=discoveryengine.SearchRequest.QueryExpansionSpec(
            condition=discoveryengine.SearchRequest.QueryExpansionSpec.Condition.AUTO,
        ),
        spell_correction_spec=discoveryengine.SearchRequest.SpellCorrectionSpec(
            mode=discoveryengine.SearchRequest.SpellCorrectionSpec.Mode.AUTO
        ),
    )

    response = client.search(request)
    return response


QUERY = "Who is the CEO of Google?"


PROJECT_ID = <PROJECT ID>
DATASTORE_ID = <DATASTORE ID>
LOCATION = "global"

print(search_sample(PROJECT_ID, LOCATION, DATASTORE_ID, QUERY).summary.summary_text)

The output of the above Python script should be the generated summary for the query.
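
Beyond the summary, the response also carries the individual matches. Here is a minimal sketch for inspecting them; the document fields follow the v1alpha types, and the contents of derived_struct_data vary with your data, so treat the details as assumptions:

response = search_sample(PROJECT_ID, LOCATION, DATASTORE_ID, QUERY)
for result in response.results:
    document = result.document
    print(document.id)                   # stable document ID
    print(document.derived_struct_data)  # extracted metadata such as title, link, snippets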

To create datastores and engines in Python, refer to this GitHub resource.
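
For a flavor of what that looks like, here is a minimal sketch of creating the same datastore with the v1alpha Python client instead of Terraform. The enum values mirror the Terraform arguments above, but treat the exact client surface as an assumption and confirm it against the GitHub resource and the current client library.

from google.cloud import discoveryengine_v1alpha as discoveryengine

client = discoveryengine.DataStoreServiceClient()
# create_data_store returns a long-running operation, like document import.
operation = client.create_data_store(
    parent="projects/<PROJECT ID>/locations/global/collections/default_collection",
    data_store=discoveryengine.DataStore(
        display_name="test-unstructured-datastore",
        industry_vertical="GENERIC",
        solution_types=["SOLUTION_TYPE_SEARCH"],
        content_config="CONTENT_REQUIRED",
    ),
    data_store_id="test-data-store-id",
)
print(operation.result().name)  # blocks until the datastore is created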

\"\"

Vertex Search and Conversation was originally published in Google Cloud - Community on Medium, where people are continuing the conversation by highlighting and responding to this story.

",
"author"=>"Anita Gutta",
"link"=>"https://medium.com/google-cloud/vertex-search-and-conversation-364cdc591167?source=rss----e52cf94d98af---4",
"published_date"=>Mon, 08 Apr 2024 06:26:09.000000000 UTC +00:00,
"image_url"=>nil,
"feed_url"=>"https://medium.com/google-cloud/vertex-search-and-conversation-364cdc591167?source=rss----e52cf94d98af---4",
"language"=>nil,
"active"=>true,
"ricc_source"=>"feedjira::v1",
"created_at"=>Tue, 09 Apr 2024 12:41:49.788355000 UTC +00:00,
"updated_at"=>Mon, 21 Oct 2024 20:03:25.586325000 UTC +00:00,
"newspaper"=>"Google Cloud - Medium",
"macro_region"=>"Blogs"}