# HashiCorp Unified Docs
https://github.com/hashicorp/web-unified-docs.git
> [!IMPORTANT]
> This README is for developers working on the documentation website. If you
> want to contribute docs content, refer to the Contribute to HashiCorp
> documentation guide.
This repository, `hashicorp/web-unified-docs`, aims to implement [[DEVDOT-023] Unified Product Documentation Repository](https://docs.google.com/document/d/1p8kOqySttvWUVfn7qiC4wGBR73LMBGMelwLt69pM3FQ/edit). The RFC for this project was intentionally light on implementation details in order to foster consensus on the broad direction.
Quick tips for contributors:
- The existing API (`content.hashicorp.com`) has endpoints that serve documentation content. You can find the source code in `hashicorp/mktg-content-workflows`.
- The goal of the unified docs API is to host all of HashiCorp's product documentation. The unified docs API will eventually replace the existing content API.
To get a migration preview running, run `make` from the root of this repo. The `make` command starts the `unified-docs` Docker profile, which spins up a local instance of `unified-devdot-api` and `dev-portal`.
Once this command completes, you can access the following endpoints:

- `dev-portal` (http://localhost:3000): container configured to pull from the experimental docs API (this repo). This image depends on the unified docs API (`unified-devdot-api`).
- `unified-devdot-api` (http://localhost:8080): container that serves content from the `content` directory. On startup, this container processes the content and assets in `/content` into `public/assets` and `public/content`. It also generates `app/api/docsPaths.json` and `app/api/versionMetadata.json` from the contents of `/content`.

> [!NOTE]
> The unified docs API container takes time to process the content and assets. You must wait for both the `unified-devdot-api` and `dev-portal` containers to complete before you can successfully test content in the `dev-portal` preview environment (http://localhost:3000). Visit http://localhost:8080/api/all-docs-paths to verify that the `unified-devdot-api` container is complete.
To spin this down gracefully, run `make clean` in a separate terminal.
If you wish to remove the local Docker images as well, run `make clean CLEAN_OPTION=full`.
The makefile serves as a convenience tool to start the local preview. If you need more granular control, the `package.json` file contains a full list of available commands.
To use these, you will need to run `npm install` and `npm run prebuild` before anything else.
Use `npm run coverage` to run coverage tests.
The unified docs API serves as one of the content APIs for `dev-portal` (the frontend application for DevDot). As a result, when implementing new features, you may need to modify both the backend (this repo) and the frontend (`dev-portal`).
If you are working on a ticket that requires changes to both the unified docs API and `dev-portal`, please set custom environment variables for your branch in Vercel to simplify testing instructions.
For example, in Vercel, for your `dev-portal` branch, you can set the following environment variables:
| Environment variable | Value |
|---|---|
| `HASHI_ENV` | `unified-docs-sandbox` |
| `UNIFIED_DOCS_API` | `<UDR-Preview-URL>` |
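As a hedged illustration of how these variables interact (this is not `dev-portal`'s actual code; the function name and fallback URL are assumptions), a frontend might select its docs API base URL like this:

```typescript
// Hypothetical sketch, NOT dev-portal's real implementation: choose the docs
// API base URL from the Vercel environment variables in the table above.
type Env = { HASHI_ENV?: string; UNIFIED_DOCS_API?: string };

function getDocsApiBaseUrl(env: Env): string {
  // Use the unified docs API preview only when the sandbox flag is set and
  // a preview URL was provided; otherwise fall back to the existing API.
  if (env.HASHI_ENV === "unified-docs-sandbox" && env.UNIFIED_DOCS_API) {
    return env.UNIFIED_DOCS_API;
  }
  return "https://content.hashicorp.com";
}

// Example: a branch preview configured as in the table above.
console.log(
  getDocsApiBaseUrl({
    HASHI_ENV: "unified-docs-sandbox",
    UNIFIED_DOCS_API: "https://udr-preview.example",
  })
);
```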
Reach out to #team-web-presence if you need to do local API development.
The `update-mdx-files.sh` script helps with product documentation migration to the `web-unified-docs` repository. When migrating documentation:

- The `web-unified-docs` repository becomes the source of truth.
- Make documentation updates in `web-unified-docs` only.

Usage:

```shell
./scripts/update-mdx-files.sh ~/Desktop/hashicorp/terraform-plugin-framework/website/docs
```
Example output:

```
Progress:
Files processed: 135
Files updated: 135
Files with no frontmatter: 0
Files with errors: 0
Completed! All MDX files have been processed.
```
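The counters above suggest the script checks each MDX file for YAML frontmatter. As a rough, hypothetical sketch of that check (not the real script's logic; file names are made up for the demo):

```shell
# Hypothetical sketch: count MDX files and how many lack frontmatter,
# treating a file as "having frontmatter" when its first line is "---".
set -u

dir="$(mktemp -d)"
printf -- '---\npage_title: Example\n---\n# Doc\n' > "$dir/with.mdx"
printf '# No frontmatter here\n' > "$dir/without.mdx"

processed=0
missing=0
for f in "$dir"/*.mdx; do
  processed=$((processed + 1))
  head -n 1 "$f" | grep -qx -- '---' || missing=$((missing + 1))
done

echo "Files processed: $processed"
echo "Files with no frontmatter: $missing"
rm -r "$dir"
```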
The repository uses a focused broken link monitoring system:
The `broken-link-check-full` workflow generates comprehensive broken link reports with prioritization guidance. When contributors create PRs that modify content, the link checker shows any broken links in PR comments with actionable guidance, without blocking development.
For detailed information about the monitoring system, see Broken Link Monitoring Documentation.
For teams migrating products to UDR (Unified Docs Renderer), use the dedicated migration workflow.
You can also run the broken link checker locally. The following commands launch a lychee Docker container to check the content directories you specify.
Run the broken link checker on all content:

```shell
npm run broken-link
```

Check a specific directory within `content`:

```shell
npm run broken-link terraform-plugin-framework
```

Check multiple directories:

```shell
npm run broken-link terraform-plugin-log terraform-plugin-mux
```
The following diagram illustrates the relationships between the unified docs API (this repo), dev-portal, and the existing content API:
```mermaid
graph LR
    subgraph "Content sources (non-migrated)"
        BDY[boundary]
        CSL[consul]
        HCP[hcp-docs]
        NMD[nomad]
        PKR[packer]
        SNT[sentinel]
        TF[terraform]
        TFC[terraform-cdk]
        TFA[terraform-docs-agents]
        TFD[terraform-docs-common]
        VGT[vagrant]
        VLT[vault]
        WPT[waypoint]
        CURALL["/content or /website"]
        BDY & CSL & HCP & NMD & PKR & SNT & TF & TFC & TFA & TFD & VGT & VLT & WPT --> CURALL
    end
    subgraph "Migrated content repo"
        TPF[terraform-plugin-framework]
        TPL[terraform-plugin-log]
        TPM[terraform-plugin-mux]
        TPS[terraform-plugin-sdk]
        TPT[terraform-plugin-testing]
        TFE[terraform-enterprise]
        MIGALL["/content"]
        TPF & TPL & TPM & TPS & TPT & TFE --> MIGALL
    end
    subgraph "APIs"
        CP[Content API<br>content.hashicorp.com]
        UDR[Unified Docs Repository<br>web-unified-docs]
    end
    subgraph "Frontend"
        DP[Dev Portal<br>dev-portal]
    end
    %% BDY & CSL & HCP & NMD & PKR & PTF & SNT & TF & TFC & TFA & TFD & VGT & VLT & WPT --> CP
    %% TPF & TPL & TPM & TPS & TPT --> UDR
    CURALL -->|Current content flow| CP
    MIGALL -->|Migrated content| UDR
    CP -->|Serves most content| DP
    UDR -->|Serves unified /content content| DP
    class TPF,TPL,TPM,TPS,TPT,BDY,CSL,HCP,NMD,PKR,PTF,SNT,TF,TFC,TFA,TFD,VGT,VLT,WPT productRepo
```
The diagram shows:

- Non-migrated product repos keep their content in their own `/content` or `/website` directories, which the existing content API serves.
- Migrated repos store their content in this repository's `/content` directory. The migrated repos will use a directory approach to versioning (rather than the historic branch and tag strategy).
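As a hedged illustration of directory-based versioning (the product and version names here are hypothetical; the actual layout may differ):

```
content/
└── terraform-plugin-framework/
    ├── v1.4.x/
    │   └── docs/
    └── v1.5.x/
        └── docs/
```

Each version lives in its own directory, so serving an older version is a path lookup rather than a branch or tag checkout.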