By: Gustavo Llermaly, from Elastic

Search as we know it today (search bar, results, filters, pages, and so on) has come a long way and enables many different capabilities. This is especially true when we know the keywords needed to find what we are looking for, or when we know which documents contain the information we want. However, when the results are documents with long text, we need an extra step beyond reading and summarizing to get a final answer. To simplify this process, companies like Google, with its Search Generative Experience (SGE), use AI to complement search results with an AI summary.

What if I told you that you can do the same thing with Elastic?

In this article, you will learn how to create a React component that displays an AI summary answering the user's question alongside the search results, helping users find answers faster. We will also ask the model to provide citations, so the answer stays grounded in the search results.

最終結(jié)果將如下所示:

You can find the full working example repository here.

Steps

  1. 創(chuàng)建端點(diǎn)
  2. 創(chuàng)建索引
  3. 索引數(shù)據(jù)
  4. 創(chuàng)建組件
  5. 提出問題

創(chuàng)建端點(diǎn)

在創(chuàng)建端點(diǎn)之前,請先查看該項(xiàng)目的高級架構(gòu)。

The recommended way to consume Elasticsearch from a UI is to proxy the calls, so we will spin up a backend that the UI can connect to for this purpose. You can read more about this approach here.

Important: The approach outlined in this article provides a simple way of handling Elasticsearch queries and summary generation. Consider your specific use case and requirements before implementing this solution. A more suitable architecture would perform both the search and the completion under the same API call, behind the proxy.
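For illustration, here is a minimal sketch of that combined approach, reusing the express/axios setup, index name, and inference endpoints used later in this article. The /api/ask route name and the prompt wording are hypothetical, not the article's final code:

require("dotenv").config();
const express = require("express");
const cors = require("cors");
const axios = require("axios");

const app = express();
app.use(express.json());
app.use(cors());

const { ELASTICSEARCH_URL, API_KEY } = process.env;
const headers = {
  "Content-Type": "application/json",
  Authorization: `ApiKey ${API_KEY}`,
};

// One round trip for the UI: search first, then summarize the top hits
app.post("/api/ask", async (req, res) => {
  try {
    const search = await axios.post(
      `${ELASTICSEARCH_URL}/search-labs-index/_search`,
      {
        size: 3,
        query: { semantic: { field: "semantic_text", query: req.body.question } },
      },
      { headers }
    );
    const context = search.data.hits.hits
      .map((hit) => hit._source.article_content)
      .join("\n\n");
    const completion = await axios.post(
      `${ELASTICSEARCH_URL}/_inference/completion/summaries-completion`,
      { input: `Context:\n${context}\n\nAnswer the question: ${req.body.question}` },
      { headers }
    );
    res.json({
      results: search.data.hits.hits,
      summary: completion.data.completion[0].result,
    });
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

app.listen(1337);

For simplicity, the rest of this article keeps search and completion as two separate proxy routes.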

Embeddings endpoint

To enable semantic search, we will use the ELSER model, which helps us find documents not only by word matching but also by semantic meaning.

You can create the ELSER endpoint using the Kibana UI:

Or via the _inference API:

PUT _inference/sparse_embedding/elser-embeddings
{
  "service": "elser",
  "service_settings": {
    "model_id": ".elser_model_2",
    "num_allocations": 1,
    "num_threads": 1
  }
}
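To sanity-check the endpoint, you can send it some text directly (the input string here is just an example); the response contains a sparse map of weighted tokens:

POST _inference/sparse_embedding/elser-embeddings
{
  "input": "Generate AI summaries with Elastic"
}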

Completion endpoint

To generate the AI summaries, we must send the relevant documents as context, together with the user query, to the model. For this, we create a completion endpoint connected to OpenAI. If you don't want to work with OpenAI, you can also choose from a growing list of other providers.

PUT _inference/completion/summaries-completion
{
  "service": "openai",
  "service_settings": {
    "api_key": "<API_KEY>",
    "model_id": "gpt-4o-mini"
  }
}

每次用戶運(yùn)行搜索時,我們都會調(diào)用模型,因此我們需要速度和成本效益,這是一個測試新款的好機(jī)會。

索引數(shù)據(jù)

由于我們正在為網(wǎng)站添加搜索體驗(yàn),因此我們可以使用 Elastic 網(wǎng)絡(luò)爬蟲來索引網(wǎng)站內(nèi)容并使用我們自己的文檔進(jìn)行測試。在此示例中,我將使用 Elastic Labs Blog。

要創(chuàng)建爬蟲,請按照文檔中的說明進(jìn)行操作。

For this example, we will use the following settings:

注意:我添加了一些提取規(guī)則來清理字段值。我還使用了爬蟲中的 semantic_text 字段,并將其與 article_content 字段關(guān)聯(lián)起來

A brief explanation of the extracted fields:

  • meta_img: the article's image, used as a thumbnail.
  • meta_author: the author's name, which lets us filter by author.
  • article_content: we index only the main content div of the article, excluding irrelevant data like the header and footer. This optimization enhances search relevance and reduces cost by producing shorter embeddings.

應(yīng)用規(guī)則并成功執(zhí)行抓取后,文檔的外觀如下:

{"_index": "search-labs-index","_id": "66a5568a30cc8eb607eec315","_version": 1,"_seq_no": 6,"_primary_term": 3,"found": true,"_source": {"last_crawled_at": "2024-07-27T20:20:25Z","url_path_dir3": "langchain-collaboration","meta_img": "https://www.elastic.co/search-labs/assets/images/langchain-partner-blog.png?5c6faef66d5699625c50453e356927d0","semantic_text": {"inference": {"inference_id": "elser_model_2","model_settings": {"task_type": "sparse_embedding"},"chunks": [{"text": """Tutorials Integrations Blog Start Free Trial Contact Sales Open navigation menu Blog / Generative AI LangChain and Elastic collaborate to add vector database and semantic reranking for RAG In the last year, we have seen a lot of movement in generative AI. Many new services and libraries have emerged. LangChain has separated itself as the most popular library for building applications with large language models (LLMs), for example Retrieval Augmented Generation (RAG) systems. The library makes it really easy to prototype and experiment with different models and retrieval systems. To enable the first-class support for Elasticsearch in LangChain, we recently elevated our integration from a community package to an official LangChain partner package . This work makes it straightforward to import Elasticsearch capabilities into LangChain applications. The Elastic team manages the code and the release process through a dedicated repository . We will keep improving the LangChain integration there, making sure that users can take full advantage of the latest improvements in Elasticsearch. Our collaboration with Elastic in the last 12 months has been exceptional, particularly as we establish better ways for developers and end users to build RAG applications from prototype to production," said Harrison Chase, Co-Founder and CEO at LangChain. "The LangChain-Elasticsearch vector database integrations will help do just that, and we're excited to see this partnership grow with future feature and integration releases. Elasticsearch is one of the most flexible and performant retrieval systems that includes a vector database. One of our goals at Elastic is to also be the most open retrieval system out there. In a space as fast-moving as generative AI, we want to have the developer's back when it comes to utilizing emerging tools and libraries. This is why we work closely with libraries like LangChain and add native support to the GenAI ecosystem. From using Elasticsearch as a vector database to hybrid search and orchestrating a full RAG application. Elasticsearch and LangChain have collaborated closely this year. We are putting our extensive experience in building search tools into making your experience of LangChain easier and more flexible. Let's take a deeper look in this blog. Rapid RAG prototyping RAG is a technique for providing users with highly relevant answers to questions. The main advantages over using LLMs directly are that user data can be easily integrated, and hallucinations by the LLM can be minimized. 
This is achieved by adding a document retrieval step that provides relevant context for the""","embeddings": {"rag": 2.2831416,"elastic": 2.1994505,"genera": 1.990228,"lang": 1.9417559,"vector": 1.7541072,"##ai": 1.5763651,"integration": 1.5619806,"##sea": 1.5154194,"##rank": 1.4946039,"retrieval": 1.3957807,"ll": 1.362704 // more embeddings ...}}]}},"additional_urls": ["https://www.elastic.co/search-labs/blog/langchain-collaboration"],"body_content": """Tutorials Integrations Blog Start Free Trial Contact Sales Open navigation menu Blog / Generative AI LangChain and Elastic collaborate to add vector database and semantic reranking for RAG In the last year, we have seen a lot of movement in generative AI. Many new services and libraries have emerged. LangChain has separated itself as the most popular library for building applications with large language models (LLMs), for example Retrieval Augmented Generation (RAG) systems. The library makes it really easy to prototype and experiment with different models and retrieval systems. To enable the first-class support for Elasticsearch in LangChain, we recently elevated our integration from a community package to an official LangChain partner package . This work makes it straightforward to import Elasticsearch capabilities into LangChain applications. The Elastic team manages the code and the release process through a dedicated repository . We will keep improving the LangChain integration there, making sure that users can take full advantage of the latest improvements in Elasticsearch. Our collaboration with Elastic in the last 12 months has been exceptional, particularly as we establish better ways for developers and end users to build RAG applications from prototype to production," said Harrison Chase, Co-Founder and CEO at LangChain. "The LangChain-Elasticsearch vector database integrations will help do just that, and we're excited to see this partnership grow with future feature and integration releases. Elasticsearch is one of the most flexible and performant retrieval systems that includes a vector database. One of our goals at Elastic is to also be the most open retrieval system out there. In a space as fast-moving as generative AI, we want to have the developer's back when it comes to utilizing emerging tools and libraries. This is why we work closely with libraries like LangChain and add native support to the GenAI ecosystem. From using Elasticsearch as a vector database to hybrid search and orchestrating a full RAG application. Elasticsearch and LangChain have collaborated closely this year. We are putting our extensive experience in building search tools into making your experience of LangChain easier and more flexible. Let's take a deeper look in this blog. Rapid RAG prototyping RAG is a technique for providing users with highly relevant answers to questions. The main advantages over using LLMs directly are that user data can be easily integrated, and hallucinations by the LLM can be minimized. This is achieved by adding a document retrieval step that provides relevant context for the LLM. Since its inception, Elasticsearch has been the go-to solution for relevant document retrieval and has since been a leading innovator, offering numerous retrieval strategies. When it comes to integrating Elasticsearch into LangChain, we have made it easy to choose between the most common retrieval strategies, for example, dense vector, sparse vector, keyword or hybrid. And we enabled power users to further customize these strategies. 
Keep reading to see some examples. (Note that we assume we have an Elasticsearch deployment .) LangChain integration package In order to use the langchain-elasticsearch partner package, you first need to install it: pip install langchain-elasticsearch Then you can import the classes you need from the langchain_elasticsearch module, for example, the ElasticsearchStore , which gives you simple methods to index and search your data. In this example, we use Elastic's sparse vector model ELSER (which has to be deployed first) as our retrieval strategy. from langchain_elasticsearch import ElasticsearchStore es_store = ElasticsearchStore( es_cloud_id="your-cloud-id", es_api_key="your-api-key", index_name="rag-example", strategy=ElasticsearchStore.SparseVectorRetrievalStrategy(model_id=".elser_model_2"), ), A simple RAG application Now, let's build a simple RAG example application. First, we add some example documents to our Elasticsearch store. texts = [ "LangChain is a framework for developing applications powered by large language models (LLMs).", "Elasticsearch is a distributed, RESTful search and analytics engine capable of addressing a growing number of use cases.", ... ] es_store.add_texts(texts) Next, we define the LLM. Here, we use the default gpt-3.5-turbo model offered by OpenAI, which also powers ChatGPT. from langchain_openai import ChatOpenAI llm = ChatOpenAI(api_key="sk-...") # or set the OPENAI_API_KEY environment variable Now we are ready to plug together our RAG system. For simplicity we take a standard prompt for instructing the LLM. We also transform the Elasticsearch store into a LangChain retriever. Finally, we chain together the retrieval step with adding the documents to the prompt and sending it to the LLM. from langchain import hub from langchain_core.runnables import RunnablePassthrough prompt = hub.pull("rlm/rag-prompt") # standard prompt from LangChain hub retriever = es_store.as_retriever() def format_docs(docs): return "\n\n".join(doc.page_content for doc in docs) rag_chain = ( {"context": retriever | format_docs, "question": RunnablePassthrough()} | prompt | llm | StrOutputParser() ) With these few lines of code, we now already have a simple RAG system. Users can now ask questions on the data: rag_chain.invoke("Which frameworks can help me build LLM apps?") "LangChain is a framework specifically designed for building LLM-powered applications. ..." It's as simple as this. Our RAG system can now respond with info about LangChain, which ChatGPT (version 3.5) cannot. Of course there are many ways to improve this system. One of them is optimizing the way we retrieve the documents. Full retrieval flexibility through the Retriever The Elasticsearch store offers common retrieval strategies out-of-the-box, and developers can freely experiment with what works best for a given use case. But what if your data model is more complex than just text with a single field? What, for example, if your indexing setup includes a web crawler that yields documents with texts, titles, URLs and tags and all these fields are important for search? Elasticsearch's Query DSL gives users full control over how to search their data. And in LangChain, the ElasticsearchRetriever enables this full flexibility directly. All that is required is to define a function that maps the user input query to an Elasticsearch request. Let's say we want to add semantic reranking capabilities to our retrieval step. 
By adding a Cohere reranking step, the results at the top become more relevant without extra manual tuning. For this, we define a Retriever that takes in a function that returns the respective Query DSL structure. def text_similarity_reranking(search_query: str) -> Dict: return { "retriever": { "text_similarity_reranker": { "retriever": { "standard": { "query": { "match": { "text_field": search_query } } } }, "field": "text_field", "inference_id": "cohere-rerank-service", "inference_text": search_query, "window_size": 10 } } } retriever = ElasticsearchRetriever.from_es_params( es_cloud_id="your-cloud-id", es_api_key="your-api-key", index_name="rag-example", content_field=text_field, body_func=text_similarity_reranking, ) (Note that the query structure for similarity reranking is still being finalized. It will be available in an upcoming release.) This retriever can slot seamlessly into the RAG code above. The result is that the retrieval part of our RAG pipeline is much more accurate, leading to more relevant documents being forwarded to the LLM and, most importantly, to more relevant answers. Conclusion Elastic's continued investment into LangChain's ecosystem brings the latest retrieval innovations to one of the most popular GenAI libraries. Through this collaboration, Elastic and LangChain enable developers to rapidly and easily build RAG solutions for end users while providing the necessary flexibility for in-depth tuning of results quality. Ready to try this out on your own? Start a free trial . Looking to build RAG into your apps? Want to try different LLMs with a vector database? Check out our sample notebooks for LangChain, Cohere and more on Github, and join Elasticsearch Relevance Engine training now. Max Jakob 5 min read 11 June 2024 Generative AI Integrations Share Twitter Facebook LinkedIn Recommended Articles Integrations How To Generative AI ? 25 July 2024 Protecting Sensitive and PII information in RAG with Elasticsearch and LlamaIndex How to protect sensitive and PII data in a RAG application with Elasticsearch and LlamaIndex. Srikanth Manvi How To Generative AI ? 19 July 2024 Build a Conversational Search for your Customer Success Application with Elasticsearch and OpenAI Explore how to enhance your customer success application by implementing a conversational search feature using advanced technologies such as Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) Lionel Palacin Integrations How To Generative AI Vector Database ? 11 July 2024 semantic_text with Amazon Bedrock Using semantic_text new feature, and AWS Bedrock as inference endpoint service Gustavo Llermaly Integrations How To Generative AI Vector Database ? 10 July 2024 Elasticsearch open inference API adds Amazon Bedrock support Elasticsearch open inference API adds support for embeddings generated from models hosted on Amazon Bedrock." Mark Hoy Hemant Malik Vector Database How To Generative AI ? 10 July 2024 Playground: Experiment with RAG using Bedrock Anthropic Models and Elasticsearch in minutes Playground is a low code interface for developers to explore grounding LLMs of their choice with their own private data, in minutes. Joe McElroy Aditya Tripathi Max Jakob 5 min read 11 June 2024 Generative AI Integrations Share Twitter Facebook LinkedIn Jump to Rapid RAG prototyping LangChain integration package A simple RAG application Full retrieval flexibility through the Retriever Conclusion Sitemap RSS Feed Search Labs Repo Elastic.co ?2024. Elasticsearch B.V. 
All Rights Reserved.""","article_content": """In the last year, we have seen a lot of movement in generative AI. Many new services and libraries have emerged. LangChain has separated itself as the most popular library for building applications with large language models (LLMs), for example Retrieval Augmented Generation (RAG) systems. The library makes it really easy to prototype and experiment with different models and retrieval systems. To enable the first-class support for Elasticsearch in LangChain, we recently elevated our integration from a community package to an official LangChain partner package . This work makes it straightforward to import Elasticsearch capabilities into LangChain applications. The Elastic team manages the code and the release process through a dedicated repository . We will keep improving the LangChain integration there, making sure that users can take full advantage of the latest improvements in Elasticsearch. Our collaboration with Elastic in the last 12 months has been exceptional, particularly as we establish better ways for developers and end users to build RAG applications from prototype to production," said Harrison Chase, Co-Founder and CEO at LangChain. "The LangChain-Elasticsearch vector database integrations will help do just that, and we're excited to see this partnership grow with future feature and integration releases. Elasticsearch is one of the most flexible and performant retrieval systems that includes a vector database. One of our goals at Elastic is to also be the most open retrieval system out there. In a space as fast-moving as generative AI, we want to have the developer's back when it comes to utilizing emerging tools and libraries. This is why we work closely with libraries like LangChain and add native support to the GenAI ecosystem. From using Elasticsearch as a vector database to hybrid search and orchestrating a full RAG application. Elasticsearch and LangChain have collaborated closely this year. We are putting our extensive experience in building search tools into making your experience of LangChain easier and more flexible. Let's take a deeper look in this blog. Rapid RAG prototyping RAG is a technique for providing users with highly relevant answers to questions. The main advantages over using LLMs directly are that user data can be easily integrated, and hallucinations by the LLM can be minimized. This is achieved by adding a document retrieval step that provides relevant context for the LLM. Since its inception, Elasticsearch has been the go-to solution for relevant document retrieval and has since been a leading innovator, offering numerous retrieval strategies. When it comes to integrating Elasticsearch into LangChain, we have made it easy to choose between the most common retrieval strategies, for example, dense vector, sparse vector, keyword or hybrid. And we enabled power users to further customize these strategies. Keep reading to see some examples. (Note that we assume we have an Elasticsearch deployment .) LangChain integration package In order to use the langchain-elasticsearch partner package, you first need to install it: pip install langchain-elasticsearch Then you can import the classes you need from the langchain_elasticsearch module, for example, the ElasticsearchStore , which gives you simple methods to index and search your data. In this example, we use Elastic's sparse vector model ELSER (which has to be deployed first) as our retrieval strategy. 
from langchain_elasticsearch import ElasticsearchStore es_store = ElasticsearchStore( es_cloud_id="your-cloud-id", es_api_key="your-api-key", index_name="rag-example", strategy=ElasticsearchStore.SparseVectorRetrievalStrategy(model_id=".elser_model_2"), ), A simple RAG application Now, let's build a simple RAG example application. First, we add some example documents to our Elasticsearch store. texts = [ "LangChain is a framework for developing applications powered by large language models (LLMs).", "Elasticsearch is a distributed, RESTful search and analytics engine capable of addressing a growing number of use cases.", ... ] es_store.add_texts(texts) Next, we define the LLM. Here, we use the default gpt-3.5-turbo model offered by OpenAI, which also powers ChatGPT. from langchain_openai import ChatOpenAI llm = ChatOpenAI(api_key="sk-...") # or set the OPENAI_API_KEY environment variable Now we are ready to plug together our RAG system. For simplicity we take a standard prompt for instructing the LLM. We also transform the Elasticsearch store into a LangChain retriever. Finally, we chain together the retrieval step with adding the documents to the prompt and sending it to the LLM. from langchain import hub from langchain_core.runnables import RunnablePassthrough prompt = hub.pull("rlm/rag-prompt") # standard prompt from LangChain hub retriever = es_store.as_retriever() def format_docs(docs): return "\n\n".join(doc.page_content for doc in docs) rag_chain = ( {"context": retriever | format_docs, "question": RunnablePassthrough()} | prompt | llm | StrOutputParser() ) With these few lines of code, we now already have a simple RAG system. Users can now ask questions on the data: rag_chain.invoke("Which frameworks can help me build LLM apps?") "LangChain is a framework specifically designed for building LLM-powered applications. ..." It's as simple as this. Our RAG system can now respond with info about LangChain, which ChatGPT (version 3.5) cannot. Of course there are many ways to improve this system. One of them is optimizing the way we retrieve the documents. Full retrieval flexibility through the Retriever The Elasticsearch store offers common retrieval strategies out-of-the-box, and developers can freely experiment with what works best for a given use case. But what if your data model is more complex than just text with a single field? What, for example, if your indexing setup includes a web crawler that yields documents with texts, titles, URLs and tags and all these fields are important for search? Elasticsearch's Query DSL gives users full control over how to search their data. And in LangChain, the ElasticsearchRetriever enables this full flexibility directly. All that is required is to define a function that maps the user input query to an Elasticsearch request. Let's say we want to add semantic reranking capabilities to our retrieval step. By adding a Cohere reranking step, the results at the top become more relevant without extra manual tuning. For this, we define a Retriever that takes in a function that returns the respective Query DSL structure. 
def text_similarity_reranking(search_query: str) -> Dict: return { "retriever": { "text_similarity_reranker": { "retriever": { "standard": { "query": { "match": { "text_field": search_query } } } }, "field": "text_field", "inference_id": "cohere-rerank-service", "inference_text": search_query, "window_size": 10 } } } retriever = ElasticsearchRetriever.from_es_params( es_cloud_id="your-cloud-id", es_api_key="your-api-key", index_name="rag-example", content_field=text_field, body_func=text_similarity_reranking, ) (Note that the query structure for similarity reranking is still being finalized. It will be available in an upcoming release.) This retriever can slot seamlessly into the RAG code above. The result is that the retrieval part of our RAG pipeline is much more accurate, leading to more relevant documents being forwarded to the LLM and, most importantly, to more relevant answers. Conclusion Elastic's continued investment into LangChain's ecosystem brings the latest retrieval innovations to one of the most popular GenAI libraries. Through this collaboration, Elastic and LangChain enable developers to rapidly and easily build RAG solutions for end users while providing the necessary flexibility for in-depth tuning of results quality.""","domains": ["https://www.elastic.co"],"title": "LangChain and Elastic collaborate to add vector database and semantic reranking for RAG — Search Labs","meta_author": ["Max Jakob"],"url": "https://www.elastic.co/search-labs/blog/langchain-collaboration","url_scheme": "https","meta_description": "Learn how LangChain and Elasticsearch can accelerate your speed of innovation in the LLM and GenAI space.","headings": ["LangChain and Elastic collaborate to add vector database and semantic reranking for RAG","Rapid RAG prototyping","LangChain integration package","A simple RAG application","Full retrieval flexibility through the Retriever","Conclusion","Protecting Sensitive and PII information in RAG with Elasticsearch and LlamaIndex","Build a Conversational Search for your Customer Success Application with Elasticsearch and OpenAI","semantic_text with Amazon Bedrock","Elasticsearch open inference API adds Amazon Bedrock support","Playground: Experiment with RAG using Bedrock Anthropic Models and Elasticsearch in minutes"],"links": 
["https://cloud.elastic.co/registration?onboarding_token=search&cta=cloud-registration&tech=trial&plcmt=navigation&pg=search-labs","https://discuss.elastic.co/c/search/84","https://github.com/elastic/elasticsearch-labs","https://github.com/langchain-ai/langchain-elastic","https://pypi.org/project/langchain-elasticsearch/","https://python.langchain.com/v0.2/docs/integrations/providers/elasticsearch/","https://search.elastic.co/?location%5B0%5D=Search+Labs&referrer=https://www.elastic.co/search-labs/blog/langchain-collaboration","https://www.elastic.co/contact","https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl.html","https://www.elastic.co/guide/en/machine-learning/current/ml-nlp-elser.html#download-deploy-elser","https://www.elastic.co/search-labs","https://www.elastic.co/search-labs/blog","https://www.elastic.co/search-labs/blog","https://www.elastic.co/search-labs/blog/category/generative-ai","https://www.elastic.co/search-labs/blog/elasticsearch-cohere-rerank","https://www.elastic.co/search-labs/blog/langchain-collaboration#a-simple-rag-application","https://www.elastic.co/search-labs/blog/langchain-collaboration#conclusion","https://www.elastic.co/search-labs/blog/langchain-collaboration#full-retrieval-flexibility-through-the-retriever","https://www.elastic.co/search-labs/blog/langchain-collaboration#langchain-integration-package","https://www.elastic.co/search-labs/blog/langchain-collaboration#rapid-rag-prototyping","https://www.elastic.co/search-labs/blog/retrieval-augmented-generation-rag","https://www.elastic.co/search-labs/blog/semantic-reranking-with-retrievers","https://www.elastic.co/search-labs/integrations","https://www.elastic.co/search-labs/tutorials","https://www.elastic.co/search-labs/tutorials/install-elasticsearch"],"id": "66a5568a30cc8eb607eec315","url_port": 443,"url_host": "www.elastic.co","url_path_dir2": "blog","url_path": "/search-labs/blog/langchain-collaboration","url_path_dir1": "search-labs"}}

創(chuàng)建代理

要設(shè)置代理服務(wù)器,我們將使用 express.js。我們將按照最佳實(shí)踐創(chuàng)建兩個端點(diǎn):一個用于處理 _search 調(diào)用,另一個用于 completion 調(diào)用。

首先創(chuàng)建一個名為 es-proxy 的新目錄,使用 cd es-proxy 導(dǎo)航到該目錄,然后使用 npm init 初始化你的項(xiàng)目。

Next, install the necessary dependencies with the following command:

yarn add express axios dotenv cors

A brief explanation of each package:

  • express: creates the proxy server that will handle incoming requests and forward them to Elasticsearch.
  • axios: a popular HTTP client that simplifies making requests to the Elasticsearch API.
  • dotenv: lets you manage sensitive data, such as API keys, by storing it in environment variables.
  • cors: lets your UI make requests to a different origin (in this case, the proxy server) by handling Cross-Origin Resource Sharing (CORS). This is essential to avoid problems when your frontend and backend are hosted on different domains or ports.

現(xiàn)在,創(chuàng)建一個 .env 文件來安全地存儲你的 Elasticsearch URL 和 API 密鑰:

ELASTICSEARCH_URL=https://<your_elasticsearch_url>
API_KEY=<your_api_key>

確保你創(chuàng)建的 API 密鑰僅限于所需的索引,并且是只讀的

最后,創(chuàng)建一個 index.js 文件,內(nèi)容如下:

index.js

require("dotenv").config();const express = require("express");
const cors = require("cors");
const app = express();
const axios = require("axios");app.use(express.json());
app.use(cors());const { ELASTICSEARCH_URL, API_KEY } = process.env;// Handle all _search requests
app.post("/api/:index/_search", async (req, res) => {try {const response = await axios.post(`${ELASTICSEARCH_URL}/${req.params.index}/_search`,req.body,{headers: {"Content-Type": "application/json",Authorization: `ApiKey ${API_KEY}`,},});res.json(response.data);} catch (error) {res.status(500).json({ error: error.message });}
});// Handle all _completion requests
app.post("/api/completion", async (req, res) => {try {const response = await axios.post(`${ELASTICSEARCH_URL}/_inference/completion/summaries-completion`,req.body,{headers: {"Content-Type": "application/json",Authorization: `ApiKey ${API_KEY}`,},});res.json(response.data);} catch (error) {res.status(500).json({ error: error.message });}
});// Start the server
const PORT = process.env.PORT || 1337;
app.listen(PORT, () => {console.log(`Server is running on port ${PORT}`);
});

現(xiàn)在,通過運(yùn)行 node index.js 啟動服務(wù)器。這將默認(rèn)在端口 1337 上啟動服務(wù)器,或者在 .env 文件中定義的端口上啟動服務(wù)器。

創(chuàng)建組件

對于 UI 組件,我們將使用 Search UI React 庫 search-ui。我們將創(chuàng)建一個自定義組件,以便每次用戶運(yùn)行搜索時,它都會使用我們創(chuàng)建的 completion 推理端點(diǎn)將最佳結(jié)果發(fā)送到 LLM,然后將答案顯示給用戶。

你可以在此處找到有關(guān)配置實(shí)例的完整教程。你可以在計(jì)算機(jī)上運(yùn)行 search-ui,也可以在此處使用我們的在線沙盒。

運(yùn)行示例并連接到數(shù)據(jù)后,在啟動應(yīng)用程序文件夾中的終端中運(yùn)行以下安裝步驟:

yarn add axios antd html-react-parser

After installing the additional dependencies, create a new AiSummary.js file for the new component. It will include a simple prompt that gives the AI its instructions and rules.

AiSummary.js

import { withSearch } from "@elastic/react-search-ui";
import { useState, useEffect } from "react";
import axios from "axios";
import { Card } from "antd";
import parse from "html-react-parser";

const formatSearchResults = (results) => {
  return results
    .slice(0, 3)
    .map(
      (result) => `
  Article Author(s): ${result.meta_author.raw.join(",")}
  Article URL: ${result.url.raw}
  Article title: ${result.title.raw}
  Article content: ${result.article_content.raw}
  `
    )
    .join("\n");
};

const fetchAiSummary = async (searchTerm, results) => {
  const prompt = `
  You are a search assistant. Your mission is to complement search results with an AI Summary to address the user request.
  User request: ${searchTerm}
  Top search results: ${formatSearchResults(results)}
  Rules:
  - The answer must be short. No more than one paragraph.
  - Use HTML
  - Use content from the most relevant search results only to answer the user request
  - Add highlights wrapping in <i><b></b></i> tags the most important phrases of your answer
  - At the end of the answer add a citations section with links to the articles you got the answer on this format:
    <h4>Citations</h4>
    <ul>
      <li><a href="{url}"> {title} </a></li>
    </ul>
  - Only provide citations from the top search results I showed you, and only if they are relevant to the user request.
  `;

  const responseData = await axios.post(
    "http://localhost:1337/api/completion",
    { input: prompt },
    {
      headers: {
        "Content-Type": "application/json",
      },
    }
  );

  return responseData.data.completion[0].result;
};

const AiSummary = ({ results, searchTerm, resultSearchTerm }) => {
  const [aiSummary, setAiSummary] = useState("");
  const [isLoading, setIsLoading] = useState(false);

  useEffect(() => {
    if (searchTerm) {
      setIsLoading(true);
      fetchAiSummary(searchTerm, results).then((summary) => {
        setAiSummary(summary);
        setIsLoading(false);
      });
    }
  }, [resultSearchTerm]);

  return (
    <Card style={{ width: "100%" }} loading={isLoading}>
      <div>
        <h2>AI Summary</h2>
        {!resultSearchTerm ? "Ask anything!" : parse(aiSummary)}
      </div>
    </Card>
  );
};

export default withSearch(({ results, searchTerm, resultSearchTerm }) => ({
  results,
  searchTerm,
  resultSearchTerm,
  AiSummary,
}))(AiSummary);

Updating App.js

現(xiàn)在我們創(chuàng)建了自定義組件,是時候?qū)⑵涮砑拥綉?yīng)用程序中了。你的 App.js 應(yīng)如下所示:

App.js

import React from "react";
import ElasticsearchAPIConnector from "@elastic/search-ui-elasticsearch-connector";
import {
  ErrorBoundary,
  SearchProvider,
  SearchBox,
  Results,
  Facet,
} from "@elastic/react-search-ui";
import { Layout } from "@elastic/react-search-ui-views";
import "@elastic/react-search-ui-views/lib/styles/styles.css";
import AiSummary from "./AiSummary";

const connector = new ElasticsearchAPIConnector(
  {
    host: "http://localhost:1337/api",
    index: "search-labs-index",
  },
  (requestBody, requestState) => {
    if (!requestState.searchTerm) return requestBody;
    requestBody.query = {
      semantic: {
        query: requestState.searchTerm,
        field: "semantic_text",
      },
    };
    return requestBody;
  }
);

const config = {
  debug: true,
  searchQuery: {
    search_fields: {
      semantic_text: {},
    },
    result_fields: {
      title: {
        snippet: {},
      },
      article_content: {
        snippet: {
          size: 10,
        },
      },
      meta_description: {},
      url: {},
      meta_author: {},
      meta_img: {},
    },
    facets: {
      "meta_author.enum": { type: "value" },
    },
  },
  apiConnector: connector,
  alwaysSearchOnInitialLoad: false,
};

export default function App() {
  return (
    <SearchProvider config={config}>
      <div className="App">
        <ErrorBoundary>
          <Layout
            header={<SearchBox />}
            bodyHeader={<AiSummary />}
            bodyContent={
              <Results
                titleField="title"
                thumbnailField="meta_img"
                urlField="url"
              />
            }
            sideContent={
              <Facet key={"1"} field={"meta_author.enum"} label={"author"} />
            }
          />
        </ErrorBoundary>
      </div>
    </SearchProvider>
  );
}

請注意,在連接器實(shí)例中我們?nèi)绾胃采w默認(rèn)查詢以使用語義查詢并利用我們創(chuàng)建的 semantic_text 映射。

Asking questions

現(xiàn)在是時候測試它了。提出有關(guān)你索引的文檔的任何問題,在搜索結(jié)果上方,你應(yīng)該會看到一張帶有 AI 摘要的卡片:

結(jié)論

重新設(shè)計(jì)你的搜索體驗(yàn)對于保持用戶的參與度非常重要,并節(jié)省他們?yōu)g覽結(jié)果以找到問題答案的時間。借助 Elastic 開放推理服務(wù)和搜索用戶界面,設(shè)計(jì)此類體驗(yàn)比以往任何時候都更容易。你準(zhǔn)備好嘗試了嗎?

準(zhǔn)備好自己嘗試了嗎?開始免費(fèi)試用。

Elasticsearch has integrations with tools like LangChain, Cohere, and more. Join our Beyond RAG Basics webinar to build your next GenAI application!

Original article: Generate AI summaries with Elastic — Search Labs
