Post summary
The post summary component condenses content and its associated metadata into a highly configurable summary.
Classes
| Class | Description | Parent | Modifies |
|---|---|---|---|
| .s-post-summary | Base parent container for a post summary. | N/A | N/A |
| .s-post-summary--answers | Container for the post summary answers. | .s-post-summary | N/A |
| .s-post-summary--content | Container for the post summary content. | .s-post-summary | N/A |
| .s-post-summary--stats | Container for the post summary stats. | .s-post-summary | N/A |
| .s-post-summary--tags | Container for the post summary tags. | .s-post-summary | N/A |
| .s-post-summary--title | Container for the post summary title. | .s-post-summary | N/A |
| .s-post-summary--answer | Container for a post summary answer. | .s-post-summary--answers | N/A |
| .s-post-summary--content-meta | A container for post metadata, such as tags and user cards. | .s-post-summary--content | N/A |
| .s-post-summary--content-type | Container for the post summary content type. | .s-post-summary--content | N/A |
| .s-post-summary--excerpt | Container for the post summary excerpt. | .s-post-summary--content | N/A |
| .s-post-summary--stats-answers | Container for the post summary answers stat. | .s-post-summary--stats | N/A |
| .s-post-summary--stats-bounty | Container for the post summary bounty stat. | .s-post-summary--stats | N/A |
| .s-post-summary--stats-item | A generic container for views, comments, read time, and other metadata, which prepends a separator icon. | .s-post-summary--stats | N/A |
| .s-post-summary--stats-votes | Container for the post summary votes stat. | .s-post-summary--stats | N/A |
| .s-post-summary--title-link | Link styling for the post summary title. | .s-post-summary--title | N/A |
| .s-post-summary--title-icon | Icon styling for the post summary title. | .s-post-summary--title | N/A |
| .s-post-summary--sm-hide | Hides the element on small screens. | N/A | .s-post-summary > * |
| .s-post-summary--sm-show | Shows the element on small screens. | N/A | .s-post-summary > * |
| .s-post-summary__answered | Adds the styling necessary for a question with an accepted answer. | N/A | .s-post-summary |
| .s-post-summary__deleted | Adds the styling necessary for a deleted post. | N/A | .s-post-summary |
| .s-post-summary--answer__accepted | Adds the styling necessary for an accepted answer. | N/A | .s-post-summary--answer |
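The parent/child relationships in the table above can be illustrated with a minimal skeleton. This is a sketch of how the containers nest, with contents elided; not every container is required in any given summary.

```html
<div class="s-post-summary">
  <div class="s-post-summary--stats">
    <div class="s-post-summary--stats-votes">…</div>
    <div class="s-post-summary--stats-answers">…</div>
    <div class="s-post-summary--stats-bounty">…</div>
    <div class="s-post-summary--stats-item">…</div>
  </div>
  <div class="s-post-summary--content">
    <div class="s-post-summary--content-meta">…</div>
    <div class="s-post-summary--title">
      <a class="s-post-summary--title-link" href="…">…</a>
    </div>
    <p class="s-post-summary--excerpt">…</p>
    <div class="s-post-summary--tags">
      <a class="s-post-summary--content-type" href="…">…</a>
    </div>
  </div>
  <div class="s-post-summary--answers">
    <div class="s-post-summary--answer">…</div>
  </div>
</div>
```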
Examples
Base
Use the post summary component to provide a concise summary of a question, article, or other content.
<div class="s-post-summary">
<div class="s-post-summary--stats s-post-summary--sm-hide">
<div class="s-post-summary--stats-votes">…</div>
<div class="s-post-summary--stats-answers">…</div>
</div>
<div class="s-post-summary--content">
<div class="s-post-summary--content-meta">
<div class="s-user-card s-user-card__sm">…</div>
<div class="s-post-summary--stats s-post-summary--sm-show">…</div>
<div class="s-post-summary--stats-item">… views</div>
</div>
<div class="s-post-summary--title">
<a class="s-post-summary--title-link" href="…">…</a>
</div>
<p class="s-post-summary--excerpt v-truncate3">…</p>
<div class="s-post-summary--tags">
<a class="s-tag" href="…">…</a>
</div>
</div>
</div>
Answered
Add the .s-post-summary__answered modifier class to indicate that the post has an accepted answer.
<div class="s-post-summary s-post-summary__answered">
…
</div>
Bountied
Include the .s-post-summary--stats-bounty element to indicate that the post has a bounty.
<div class="s-post-summary">
<div class="s-post-summary--stats s-post-summary--sm-hide">
<div class="s-post-summary--stats-votes">…</div>
<div class="s-post-summary--stats-answers">…</div>
<div class="s-post-summary--stats-bounty">
+50 <span class="v-visible-sr">bounty</span>
</div>
</div>
…
</div>
Ignored
Including an ignored tag will automatically apply custom ignored styling to the post summary.
<div class="s-post-summary">
…
<div class="s-post-summary--tags">
<a class="s-tag s-tag__ignored" href="…">…</a>
…
</div>
</div>
Watched
Including a watched tag will automatically apply custom watched styling to the post summary.
<div class="s-post-summary">
…
<div class="s-post-summary--tags">
<a class="s-tag s-tag__watched" href="…">…</a>
…
</div>
</div>
Deleted
Add the .s-post-summary__deleted modifier class to apply custom deleted styling to the post summary.
<div class="s-post-summary s-post-summary__deleted">
…
</div>
State badges
Include the appropriate state badge to indicate the current state of the post.
<!-- Draft -->
<div class="s-post-summary">
<div class="s-post-summary--content">
<div class="s-post-summary--sm-show">
<span class="s-badge s-badge__sm s-badge__info">Draft</span>
</div>
<div class="s-post-summary--content-meta">
…
<span class="s-badge s-badge__info ml-auto s-post-summary--sm-hide">Draft</span>
</div>
…
</div>
</div>
Content types
Include the appropriate content type badge to indicate the type of content the post represents.
<!-- Announcement -->
<div class="s-post-summary">
…
<div class="s-post-summary--content">
…
<div class="s-post-summary--tags">
<a class="s-post-summary--content-type" href="#">Announcement</a>
…
</div>
</div>
</div>
Excerpt sizes
Post summaries can be shown without an excerpt, or with an excerpt truncated to one, two, or three lines of text. Omit the excerpt container entirely to hide the excerpt, or apply the appropriate truncation class to it. See also Truncation.
Classes
| Class | Description |
|---|---|
| .v-truncate1 | Truncates the excerpt to 1 line of text. |
| .v-truncate2 | Truncates the excerpt to 2 lines of text. |
| .v-truncate3 | Truncates the excerpt to 3 lines of text. |
Examples
<!-- No excerpt -->
<div class="s-post-summary">…</div>
<!-- 1-line excerpt -->
<div class="s-post-summary">
…
<p class="s-post-summary--excerpt v-truncate1">…</p>
…
</div>
<!-- 2-line excerpt -->
<div class="s-post-summary">
…
<p class="s-post-summary--excerpt v-truncate2">…</p>
…
</div>
<!-- 3-line excerpt -->
<div class="s-post-summary">
…
<p class="s-post-summary--excerpt v-truncate3">…</p>
…
</div>
Small container
Post summaries adapt to their container size. When shown with a container smaller than 448px, the post summary renders with a compact layout.
<div class="s-post-summary">…</div>
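As a sketch, the compact layout can be previewed by constraining the summary's parent element to a width below the 448px threshold. The 400px value and inline style below are arbitrary choices for illustration; any narrower wrapper works.

```html
<!-- Any parent narrower than 448px yields the compact layout -->
<div style="max-width: 400px">
  <div class="s-post-summary">…</div>
</div>
```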
Answers
Answers to a question can be shown in a post summary. Include the .s-post-summary--answers container to show the answers.
For accepted answers, add the .s-post-summary--answer__accepted modifier class and display the Accepted answer text and icon as shown in the example below.
<div class="s-post-summary">
…
<div class="s-post-summary--answers">
<div class="s-post-summary--answer s-post-summary--answer__accepted">
…
</div>
<div class="s-post-summary--answer">
…
</div>
</div>
</div>