Editing article
Title
Summary
Content
<figure><img alt="" src="https://cdn-images-1.medium.com/max/797/1*KUZsLKGRzS9-M2dH8GbPbw.png" /></figure><p>Analyzing customer complaints is crucial for businesses as it enhances customer experience and fosters trust by providing insights into areas that need improvement.</p><p>Summarization adds value by condensing vast amounts of feedback into actionable insights, enabling businesses to quickly identify trends, prioritize issues, and implement targeted solutions. This efficient process empowers businesses to proactively address customer concerns, improve products or services, and ultimately improve customer satisfaction and strengthen loyalty.</p><p>In this post, my main goal is to condense lengthy customer complaints (<a href="https://www.consumerfinance.gov/">Consumer Financial Protection Bureau</a> (CFPB) data) and extract the relevant important information from them efficiently. I guide you through my use of Vertex AI PaLM 2 along with LangChain and compare the results of the summarized complaint with those of an open-source LLM (LaMini-Flan-T5-248M), also used with LangChain.</p><pre>#install required libraries<br><br>!pip install huggingface-hub<br>!pip install langchain<br>!pip install transformers</pre><pre>#load required libraries<br>import pandas as pd<br><br>#import the HuggingFace pipeline and summarize chain from langchain<br>from langchain.llms import HuggingFacePipeline<br>from langchain.chains.summarize import load_summarize_chain<br>from langchain.text_splitter import RecursiveCharacterTextSplitter<br>from langchain import PromptTemplate, LLMChain<br><br># load Vertex AI<br>from langchain.llms import VertexAI<br></pre><p><strong>Define LLM Model</strong></p><p><strong><em>LLM Model: Vertex AI PaLM 2</em></strong></p><p><a href="https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/text">PaLM 2</a> is Google’s LLM approach to responsible Generative AI and is fine-tuned for different NLP tasks such as classification, summarization, and entity 
extraction.</p><pre>#Define the Vertex AI PaLM 2 llm to generate responses<br>llm = VertexAI(model_name='text-bison@001',<br> batch_size=100, #set this if you are using batch processing<br> model_kwargs={"temperature":0, "max_length":512}<br> )</pre><p><strong><em>LLM Model: LaMini-Flan-T5</em></strong></p><p>LaMini-Flan-T5-248M is an open-source LLM: a refined iteration of google/flan-t5-base trained on the LaMini-instruction dataset of 2.58M samples.</p><pre>#define the LaMini model checkpoint in langchain<br>checkpoint = 'MBZUAI/LaMini-Flan-T5-248M'<br><br>#huggingfacepipeline details<br># Define the llm to generate responses<br>llm = HuggingFacePipeline.from_model_id(model_id=checkpoint,<br> batch_size=100, #set this if you are using batch processing<br> task='text2text-generation',<br> model_kwargs={"temperature":0, "max_length":512})<br> </pre><p><strong>Define Text Splitter and Summarizer Chain</strong></p><p>Since some of the complaints have long descriptions (that exceed the maximum token limit of the LLMs), I use LangChain to split them into separate chunks and summarize them with a “map_reduce” chain type. This sends each chunk separately to the LLM in the “Map” step, and the “Reduce” step then integrates all the partial summaries at the end. This is one way to summarize large documents, but it requires several calls to the LLM. 
It may, however, impact accuracy and performance.</p><p>See below how I defined a recursive text splitter and a prompt with <a href="https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/">PromptTemplate</a> to guide the LLM to summarize the text.</p><pre>#define a recursive text splitter to chunk the complaints<br>text_splitter = RecursiveCharacterTextSplitter(<br> chunk_size = 1000, #I use a chunk size of 1000<br> chunk_overlap = 40,<br> length_function = len,<br>)<br><br><br>#set the prompt template<br>prompt_template ="""<br>Summarize the given text by highlighting the most important information<br><br>{text}<br><br>Summary:<br> """<br><br>#define the prompt<br>prompt = PromptTemplate(template=prompt_template, input_variables=["text"])<br><br>#define a chain with the map_reduce type<br>llm_chain = load_summarize_chain(llm, map_prompt=prompt, combine_prompt=prompt, verbose=True, chain_type="map_reduce")<br><br></pre><p>Summarization can be done in an “online” mode (one complaint at a time) or in “batch” mode (a batch of complaints at once).</p><p><strong>Online Summarization:</strong></p><pre>#for the online mode, just pass one complaint text<br><br>texts = text_splitter.create_documents([complaint_text])<br>summary = llm_chain.run(texts)<br>print(summary)</pre><p>Here you can see an example of a split complaint description:</p><pre>#example of output split texts<br>[<br> Document(<br> page_content='I am writing to formally complain about inaccurate and illegal reporting of transactions on <br>my credit report, which I believe violates the Fair Credit Reporting Act ( FCRA ) specifically 15 U.S. Code 1681a.I<br>have carefully reviewed my credit report, and I have identified several inaccuracies in the reporting of late <br>payments and utilization of credit. As per 15 U.S. 
Code 1681a, The term consumer reporting agency means any person <br>which, for monetary fees, dues, or on a cooperative nonprofit basis, regularly engages in whole or in part in the <br>practice of assembling or evaluating consumer credit information or other information on consumers for the purpose <br>of furnishing consumer reports to third parties, and which uses any means or facility of interstate commerce for <br>the purpose of preparing or furnishing consumer reports. \\n The term consumer means an individual. \\n The term <br>consumer report means any written, oral, or other communication of \\nany information by a consumer reporting'<br> ),<br> Document(<br> page_content="information by a consumer reporting agency bearing on a consumers credit worthiness, credit <br>standing, credit capacity, character, general reputation, personal characteristics, or mode of living. * ( 2 ) <br>Exclusions \\n ( A ) ( i ) report containing information solely as to transactions or experiences between the <br>consumer and the person making the report ; \\n It is illegal to report inaccurate information that adversely <br>affects a consumer 's creditworthiness. Below are the specific discrepancies that I have identified Account number <br>Account type : Home Equity. Date opened Late payments recognized on of . Account number Account type : Credit card.<br>Date opened Late payments recognized on of Account number Account type : Auto Loan. Date opened Late payments <br>recognized on of Account number Account type : Home Equity. Date opened 104 % credit utilization . Account number <br>Account type : Credit card. Date opened 1 % utilization Account number Account type : Credit Card. Date opened 1 %"<br> ),<br> Document(<br> page_content='type : Credit Card. Date opened 1 % Utilization.\\n I am formally requesting that you conduct<br>a thorough investigation into these matters, as required by the FCRA. 
I kindly request that you promptly correct <br>the inaccurate information on my credit report by removing the incorrect late payments and adjusting the reported <br>credit utilization. I understand that under 15 U.S. Code 1681i, you are required to conduct a reasonable <br>investigation within 30 days of receiving a dispute. I urge you to adhere to this statutory requirement and provide<br>me with written notification of the results of your investigation. If the investigation confirms the inaccuracies, <br>I request that you update my credit report accordingly and provide me with a revised copy. Additionally, I would <br>appreciate it if you could provide me with information on the steps taken to prevent such errors in the future. If <br>my concerns are not addressed within the stipulated time frame, I will have no choice but to escalate this matter'<br> ),<br> Document(<br> page_content='no choice but to escalate this matter to the Consumer Financial Protection Bureau. Please <br>treat this matter with the urgency it deserves.'<br> )<br>]</pre><p>And here is a pretty good overall summary, with the important information in the complaint highlighted:</p><pre>#Example of output summary using LaMini-Flan-T5<br><br>The person is expressing a complaint about inaccurate and illegal reporting of transactions on their credit report,<br>which violates the Fair Credit Reporting Act (FCRA) specifically 15 U.S. Code 1681a. The person identified several <br>discrepancies in the reporting of late payments and utilization of credit, and is requesting a thorough <br>investigation into the credit card utilization and the FCRA's requirement to conduct a reasonable investigation <br>within 30 days of receiving a dispute. 
They request to update their credit report and provide information on steps <br>to prevent future errors.</pre><pre>#Example of output summary using Vertex AI PaLM 2 LLM<br><br>The writer is writing to formally complain about inaccurate and illegal <br>reporting of transactions on their credit report. The writer believes that <br>this violates the Fair Credit Reporting Act (FCRA). The writer has carefully reviewed their credit report and has identified several inaccuracies in the reporting of late payments and utilization of credit. The writer is requesting that the consumer reporting agency correct the inaccuracies in their credit report.\n\nIt is illegal to report inaccurate information that adversely affects a consumer's creditworthiness.<br>The specific discrepancies are:\n\n- Account number Account type: Home Equity. Date opened Late payments recognized on of .\n- Account number Account type: Credit card</pre><p><strong>Batch Summarization:</strong></p><p>For batch processing, LangChain’s “batch” execution mode should be used:</p><pre>def split_doc(text_splitter, doc):<br> """<br> function to split an input document using LangChain<br> Args:<br> text_splitter: a langchain text splitter<br> doc: string text<br> Output:<br> texts: a list of split documents<br> """<br> texts = text_splitter.create_documents([doc])<br> <br> return texts<br><br>def summarize_docs(docs, llm_chain):<br> """<br> function to summarize chunked documents<br> Args:<br> llm_chain: a langchain summarize chain<br> docs: chunked documents<br> Output:<br> summaries: list of summarized documents<br> """<br> #summarize all chunks in one go<br> summary = llm_chain.batch(docs)<br> <br> summaries = []<br> #extract summaries<br> for summarized_doc in summary:<br> summaries.append(summarized_doc['output_text'])<br> <br> return summaries</pre><pre><br>#load complaints data<br>#read data from file storage<br>df_complaints = read_data()<br><br>#set the complaint description column for 
summarizing<br>desc_col = 'Consumer complaint narrative'<br><br>#chunk all complaints<br>docs = df_complaints[desc_col].apply(lambda doc: split_doc(text_splitter, doc))<br><br>#extract and concatenate summaries to the original data<br>df_complaints['summarized_narrative'] = summarize_docs(docs, llm_chain)</pre><p><strong>Evaluation:</strong></p><p>Human assessment plays a crucial role in evaluating summarization tasks. It is essential to thoroughly review the generated output to ensure it is concise while preserving the core message of the original text.</p><p>To ensure that summaries align closely with human perception, selected samples of summarized documents can be compared with human interpretations, and their <a href="https://www.freecodecamp.org/news/what-is-rouge-and-how-it-works-for-evaluation-of-summaries-e059fb8ac840/#:~:text=ROUGE%20stands%20for%20Recall%2DOriented,as%20well%20as%20machine%20translations.&text=If%20we%20consider%20just%20the,and%20reference%20summary%20is%206."><em>ROUGE</em></a> (Recall-Oriented Understudy for Gisting Evaluation) scores can be calculated. ROUGE comprises metrics that enable effective evaluation of automatic text summarization and machine translation.</p><p><strong>Final Note:</strong></p><p>In this article, I’ve showcased the development of a scalable summarization solution. Both the LaMini-Flan-T5 and Vertex AI PaLM 2 (taking into account the<a href="https://cloud.google.com/vertex-ai/generative-ai/pricing"> associated costs</a>) models, along with LangChain, exhibited strong performance in extracting the important highlights from complaints, demonstrating robust capabilities in generating AI-powered summaries.</p><p>In upcoming posts, I’ll employ BERTopic and LLMs to identify the predominant trends in customer complaints and uncover the root causes behind these issues. 
This analysis aims to provide valuable insights for businesses.</p><hr><p><a href="https://medium.com/google-cloud/from-genai-to-insights-from-your-customers-part-1-b14213a6f288">From GenAI to Insights from Your Customers (Part 1)</a> was originally published in <a href="https://medium.com/google-cloud">Google Cloud - Community</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>
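<p>The chunking idea described above can be sketched in plain Python. This is an illustrative simplification, not LangChain's actual implementation: the real RecursiveCharacterTextSplitter also tries to break on natural separators such as paragraphs and sentences before falling back to fixed lengths.</p>

```python
# Simplified sketch of fixed-size chunking with overlap, mirroring the
# chunk_size=1000 / chunk_overlap=40 settings used with
# RecursiveCharacterTextSplitter above. Unlike the real splitter, this
# version ignores separators and cuts purely by character count.
def chunk_text(text: str, chunk_size: int = 1000, chunk_overlap: int = 40) -> list:
    step = chunk_size - chunk_overlap  # each new chunk starts `step` chars later
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

complaint = "x" * 2500  # stand-in for a long complaint narrative
chunks = chunk_text(complaint)
# 2500 chars with step 960 -> three chunks of 1000, 1000, and 580 characters
```

<p>Each chunk repeats the last 40 characters of the previous one, which helps the "Map" step keep context across chunk boundaries.</p>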
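<p>To make the ROUGE evaluation mentioned above concrete, here is a minimal ROUGE-1 sketch in pure Python: unigram precision, recall, and F1 between a candidate summary and a human-written reference. In practice a library such as rouge-score would be used (it also handles stemming and ROUGE-L); the two example sentences below are made up for illustration only.</p>

```python
# Minimal ROUGE-1 sketch: unigram overlap between a candidate summary
# and a human reference. A simplified illustration, not a full ROUGE
# implementation (no stemming, no ROUGE-L).
from collections import Counter

def rouge_1(candidate: str, reference: str) -> dict:
    """Compute unigram precision, recall, and F1 between two texts."""
    cand_tokens = candidate.lower().split()
    ref_tokens = reference.lower().split()
    # clipped overlap: each unigram counts at most as often as it
    # appears in the other text
    overlap = sum((Counter(cand_tokens) & Counter(ref_tokens)).values())
    precision = overlap / len(cand_tokens) if cand_tokens else 0.0
    recall = overlap / len(ref_tokens) if ref_tokens else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# hypothetical reference and model summary for illustration
reference = "the complaint disputes inaccurate late payments on the credit report"
candidate = "the writer disputes inaccurate late payments reported on their credit report"
scores = rouge_1(candidate, reference)
```

<p>High recall means the summary covers most of the reference; high precision means it adds little beyond it.</p>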
Author
Link
Published date
Image url
Feed url
Guid
Hidden blurb
--- !ruby/object:Feedjira::Parser::RSSEntry title: From GenAI to Insights from Your Customers (Part 1) url: https://medium.com/google-cloud/from-genai-to-insights-from-your-customers-part-1-b14213a6f288?source=rss----e52cf94d98af---4 author: Tara Pourhabibi categories: - text-summarization - google-cloud-platform - large-language-models - generative-ai - vertex-ai published: 2024-04-02 01:05:25.000000000 Z entry_id: !ruby/object:Feedjira::Parser::GloballyUniqueIdentifier is_perma_link: 'false' guid: https://medium.com/p/b14213a6f288 carlessian_info: news_filer_version: 2 newspaper: Google Cloud - Medium macro_region: Blogs rss_fields: - title - url - author - categories - published - entry_id - content
Language
Active
Ricc internal notes
Imported via /Users/ricc/git/gemini-news-crawler/webapp/db/seeds.d/import-feedjira.rb on 2024-04-02 21:51:22 +0200. Content is EMPTY here. Entried: title,url,author,categories,published,entry_id,content. TODO add Newspaper: filename = /Users/ricc/git/gemini-news-crawler/webapp/db/seeds.d/../../../crawler/out/feedjira/Blogs/Google Cloud - Medium/2024-04-02-From_GenAI_to_Insights_from_Your_Customers_(Part_1)-v2.yaml
Ricc source