Post summary

The post summary component presents a piece of content and its associated metadata in a highly configurable layout.

Classes

| Class | Description | Parent | Modifies |
| --- | --- | --- | --- |
| .s-post-summary | Base parent container for a post summary. | N/A | N/A |
| .s-post-summary--answers | Container for the post summary answers. | .s-post-summary | N/A |
| .s-post-summary--content | Container for the post summary content. | .s-post-summary | N/A |
| .s-post-summary--stats | Container for the post summary stats. | .s-post-summary | N/A |
| .s-post-summary--tags | Container for the post summary tags. | .s-post-summary | N/A |
| .s-post-summary--title | Container for the post summary title. | .s-post-summary | N/A |
| .s-post-summary--answer | Container for a post summary answer. | .s-post-summary--answers | N/A |
| .s-post-summary--content-meta | A container for post metadata, such as tags and user cards. | .s-post-summary--content | N/A |
| .s-post-summary--content-type | Container for the post summary content type. | .s-post-summary--content | N/A |
| .s-post-summary--excerpt | Container for the post summary excerpt. | .s-post-summary--content | N/A |
| .s-post-summary--stats-answers | Container for the post summary answers stat. | .s-post-summary--stats | N/A |
| .s-post-summary--stats-bounty | Container for the post summary bounty stat. | .s-post-summary--stats | N/A |
| .s-post-summary--stats-item | A generic container for views, comments, read time, and other metadata; prepends a separator icon. | .s-post-summary--stats | N/A |
| .s-post-summary--stats-votes | Container for the post summary votes stat. | .s-post-summary--stats | N/A |
| .s-post-summary--title-link | Link styling for the post summary title. | .s-post-summary--title | N/A |
| .s-post-summary--title-icon | Icon styling for the post summary title. | .s-post-summary--title | N/A |
| .s-post-summary--sm-hide | Hides the element on small screens. | N/A | .s-post-summary > * |
| .s-post-summary--sm-show | Shows the element on small screens. | N/A | .s-post-summary > * |
| .s-post-summary__answered | Adds the styling necessary for a question with an accepted answer. | N/A | .s-post-summary |
| .s-post-summary__deleted | Adds the styling necessary for a deleted post. | N/A | .s-post-summary |
| .s-post-summary--answer__accepted | Adds the styling necessary for an accepted answer. | N/A | .s-post-summary--answer |

Examples

Base

Use the post summary component to provide a concise summary of a question, article, or other content.

<div class="s-post-summary">
    <div class="s-post-summary--stats s-post-summary--sm-hide">
        <div class="s-post-summary--stats-votes"></div>
        <div class="s-post-summary--stats-answers"></div>
    </div>
    <div class="s-post-summary--content">
        <div class="s-post-summary--content-meta">
            <div class="s-user-card s-user-card__sm"></div>
            <div class="s-post-summary--stats s-post-summary--sm-show"></div>
            <div class="s-post-summary--stats-item">… views</div>
        </div>
        <div class="s-post-summary--title">
            <a class="s-post-summary--title-link" href="…"></a>
        </div>
        <p class="s-post-summary--excerpt v-truncate3"></p>
        <div class="s-post-summary--tags">
            <a class="s-tag" href="…"></a>
        </div>
    </div>
</div>

Answered

Add the .s-post-summary__answered modifier class to indicate that the post has an accepted answer.

<div class="s-post-summary s-post-summary__answered"></div>
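
A fuller version of this example composes the documented stats classes inside the answered post summary. The vote and answer text below is placeholder content, not required markup:

```html
<div class="s-post-summary s-post-summary__answered">
    <div class="s-post-summary--stats s-post-summary--sm-hide">
        <div class="s-post-summary--stats-votes">+24 votes</div>
        <div class="s-post-summary--stats-answers">1 answer</div>
    </div>
</div>
```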

Bountied

Include the .s-post-summary--stats-bounty element to indicate that the post has a bounty.

<div class="s-post-summary">
    <div class="s-post-summary--stats s-post-summary--sm-hide">
        <div class="s-post-summary--stats-votes"></div>
        <div class="s-post-summary--stats-answers"></div>
        <div class="s-post-summary--stats-bounty">
            +50 <span class="v-visible-sr">bounty</span>
        </div>
    </div>
</div>

Ignored

Including an ignored tag will automatically apply custom ignored styling to the post summary.

<div class="s-post-summary">
    <div class="s-post-summary--tags">
        <a class="s-tag s-tag__ignored" href="…"></a>
    </div>
</div>

Watched

Including a watched tag will automatically apply custom watched styling to the post summary.

<div class="s-post-summary">
    <div class="s-post-summary--tags">
        <a class="s-tag s-tag__watched" href="…"></a>
    </div>
</div>

Deleted

Add the .s-post-summary__deleted modifier class to apply custom deleted styling to the post summary.

<div class="s-post-summary s-post-summary__deleted"></div>

State badges

Include the appropriate state badge (Draft, Review, Closed, Archived, or Pinned) to indicate the current state of the post.

<!-- Draft -->
<div class="s-post-summary">
    <div class="s-post-summary--content">
        <div class="s-post-summary--sm-show">
            <span class="s-badge s-badge__sm s-badge__info">Draft</span>
        </div>
        <div class="s-post-summary--content-meta">
            <span class="s-badge s-badge__info ml-auto s-post-summary--sm-hide">Draft</span>
        </div>
    </div>
</div>
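
The remaining states follow the same pattern with a different badge label. This sketch reuses the Draft markup and assumes the same .s-badge__info variant; the actual badge variant may differ per state:

```html
<!-- Review (the same structure applies to Closed, Archived, and Pinned) -->
<div class="s-post-summary">
    <div class="s-post-summary--content">
        <div class="s-post-summary--sm-show">
            <span class="s-badge s-badge__sm s-badge__info">Review</span>
        </div>
        <div class="s-post-summary--content-meta">
            <span class="s-badge s-badge__info ml-auto s-post-summary--sm-hide">Review</span>
        </div>
    </div>
</div>
```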

Content types

Include the appropriate content type badge (Announcement, How-to guide, Knowledge article, or Policy) to indicate the type of content the post represents.

<!-- Announcement -->
<div class="s-post-summary">
    <div class="s-post-summary--content">
        <div class="s-post-summary--tags">
            <a class="s-post-summary--content-type" href="#">Announcement</a>
        </div>
    </div>
</div>
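
The other content types use the same markup with a different label:

```html
<!-- How-to guide (the same structure applies to Knowledge article and Policy) -->
<div class="s-post-summary">
    <div class="s-post-summary--content">
        <div class="s-post-summary--tags">
            <a class="s-post-summary--content-type" href="#">How-to guide</a>
        </div>
    </div>
</div>
```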

Excerpt sizes

Post summaries can be shown without an excerpt, or with an excerpt truncated to one, two, or three lines of text. Exclude the excerpt container to hide the excerpt, or apply the appropriate truncation class to the excerpt container. See also Truncation.

Classes

| Class | Description |
| --- | --- |
| .v-truncate1 | Truncates the excerpt to 1 line of text. |
| .v-truncate2 | Truncates the excerpt to 2 lines of text. |
| .v-truncate3 | Truncates the excerpt to 3 lines of text. |

Examples

<!-- No excerpt -->
<div class="s-post-summary"></div>

<!-- 1-line excerpt -->
<div class="s-post-summary">
    <p class="s-post-summary--excerpt v-truncate1"></p>
</div>

<!-- 2-line excerpt -->
<div class="s-post-summary">
    <p class="s-post-summary--excerpt v-truncate2"></p>
</div>

<!-- 3-line excerpt -->
<div class="s-post-summary">
    <p class="s-post-summary--excerpt v-truncate3"></p>
</div>

Small container

Post summaries adapt to their container size. When shown with a container smaller than 448px, the post summary renders with a compact layout.

<div class="s-post-summary"></div>
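
To preview the compact layout, render the component inside a width-constrained wrapper. The inline max-width below is one hypothetical way to force a container narrower than 448px; any layout mechanism that constrains the container works:

```html
<div style="max-width: 400px;">
    <div class="s-post-summary"></div>
</div>
```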

Answers

Answers to a question can be shown in a post summary. Include the .s-post-summary--answers container to show the answers.

For accepted answers, add the .s-post-summary--answer__accepted modifier class and display the Accepted answer text and icon as shown in the example below.

<div class="s-post-summary">
    <div class="s-post-summary--answers">
        <div class="s-post-summary--answer s-post-summary--answer__accepted"></div>
        <div class="s-post-summary--answer"></div>
    </div>
</div>
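
As a sketch of the accepted-answer content, the label text can be placed directly inside the accepted answer container. The icon markup is omitted here because it depends on your icon set:

```html
<div class="s-post-summary">
    <div class="s-post-summary--answers">
        <div class="s-post-summary--answer s-post-summary--answer__accepted">
            <!-- checkmark icon markup goes here -->
            Accepted answer
        </div>
        <div class="s-post-summary--answer"></div>
    </div>
</div>
```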