Killer-Skills

start-dhti — a setup guide for GenAI healthcare application development with Node.js and Docker

v1.0.0
GitHub

About this Skill

start-dhti is a reference architecture for GenAI in healthcare and a skill platform for vibe coding, facilitating the installation of elixirs and conches for testing and prototyping. It is ideal for GenAI healthcare agents that need rapid prototyping of backend and frontend applications with Node.js and Docker.

Features

Installs elixirs and conches created by other skills for seamless integration
Sets up a fully functional DHTI development server for testing and prototyping using Node.js and Docker
Rapidly prototypes GenAI healthcare applications with both backend and frontend components
Utilizes Docker and Docker Compose for efficient containerization and deployment
Requires Node.js (>= 18.0.0) for compatibility and optimal performance

Core Topics

dermatologist
Updated: 3/6/2026

Quality Score

60 — Excellent (Top 5%), based on code quality & docs
Installation

Universal install (auto-detect; supports Cursor, Windsurf, and VS Code):

> npx killer-skills add dermatologist/dhti/start-dhti

Agent Capability Analysis

The start-dhti MCP Server by dermatologist is an open-source community integration for Claude and other AI agents, enabling seamless task automation and capability expansion.

Ideal Agent Persona

Perfect for GenAI Healthcare Agents needing rapid prototyping of backend and frontend applications with Node.js and Docker

Core Value

Empowers agents to rapidly prototype GenAI healthcare applications with both backend and frontend components, using Node.js and Docker for efficient development, and enables the installation of elixirs and conches created by other skills.

Capabilities Granted for start-dhti MCP Server

Installing elixirs and conches for customized healthcare applications
Setting up a fully functional DHTI development server for testing and prototyping
Rapidly prototyping GenAI healthcare applications with integrated backend and frontend components

Prerequisites & Limits

  • Requires Node.js (>= 18.0.0) installation
  • Docker and Docker Compose must be installed and running
  • Sufficient disk space for Docker images is necessary
Project

  • SKILL.md (9.1 KB)
  • .cursorrules (1.2 KB)
  • package.json (240 B)

SKILL.md

When to Use This Skill

Use this skill when you need to:

  • Install elixirs and conches created by other skills
  • Set up a fully functional DHTI development server for testing and prototyping
  • Rapidly prototype GenAI healthcare applications with both backend and frontend components

Prerequisites

Before using this skill, ensure you have:

  • Node.js (>= 18.0.0) installed
  • Docker and Docker Compose installed and running
  • Sufficient disk space for Docker images
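The first two prerequisites can be sanity-checked from the shell before you begin. This is an illustrative sketch, not part of dhti-cli; the `major_of` helper is hypothetical and simply parses the major version out of `node -v`-style output.

```shell
# Sketch: verify prerequisites (helper names are illustrative, not part of dhti-cli).
# Extract the major version from a "v18.19.0"-style string.
major_of() { v=${1#v}; echo "${v%%.*}"; }

if command -v node >/dev/null 2>&1; then
  if [ "$(major_of "$(node -v)")" -ge 18 ]; then
    echo "Node.js OK: $(node -v)"
  else
    echo "Node.js >= 18.0.0 required, found $(node -v)" >&2
  fi
else
  echo "Node.js not found" >&2
fi

# Docker must be installed *and* the daemon running (docker ps fails otherwise).
if command -v docker >/dev/null 2>&1 && docker ps >/dev/null 2>&1; then
  echo "Docker OK"
else
  echo "Docker not installed or not running" >&2
fi
```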

Instructions

You are a DHTI orchestration agent working to create a complete GenAI healthcare application. Follow these instructions sequentially. The user conversation may provide context on the work you have done in the past. Always internalize that and reuse it where possible.

Phase 1: Environment Setup

  1. Verify Prerequisites:

    • Check that Node.js and Docker are installed
    • Verify Docker is running: docker ps
  2. Directory Structure

workspace/
├── dhti-elixir/
│   └── packages/
│       └── <name>/                      # Your generated elixir package
│           ├── notes/todo.md            # Your detailed TODO list and plan
│           ├── src/dhti_elixir_<name>/  # Main code for the elixir
│           │   ├── chain.py             # Main chain implementation
│           │   ├── bootstrap.py         # Bootstrap/configuration file
│           │   ├── README.md            # Documentation for your elixir
│           │   └── (other files, e.g., __init__.py, utils.py)
│           ├── pyproject.toml           # Add new dependencies here
│           └── tests/
│               └── test_chain.py        # Example test file for chain.py
└── openmrs-esm-dhti/
    └── packages/
        └── esm-dhti-<name>/             # Your generated conch package
            ├── notes/todo.md            # Your detailed TODO list and plan
            ├── src/                     # Main code for the conch (multiple subdirectories)
            ├── package.json             # Update dependencies and metadata here
            └── README.md                # Documentation for your conch
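The elixir half of this layout can be scaffolded with a few shell commands. A sketch only — the `glycemic_advisor` name is the example used later in this guide, and dhti-cli or another skill may generate these files for you:

```shell
# Sketch: scaffold the elixir package layout shown above (name is an example).
name=glycemic_advisor
base="workspace/dhti-elixir/packages/${name}"

mkdir -p "${base}/notes" "${base}/src/dhti_elixir_${name}" "${base}/tests"
touch "${base}/notes/todo.md" \
      "${base}/src/dhti_elixir_${name}/chain.py" \
      "${base}/src/dhti_elixir_${name}/bootstrap.py" \
      "${base}/src/dhti_elixir_${name}/README.md" \
      "${base}/pyproject.toml" \
      "${base}/tests/test_chain.py"
```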

Phase 2: Set Up DHTI Infrastructure

  1. Create Docker Compose Configuration:

    bash
    npx dhti-cli compose add -m langserve

    This creates a Docker Compose configuration with:

    • LangServe (GenAI backend)
    • Other necessary infrastructure

    Optionally add additional modules as needed:

    • -m ollama for local LLM hosting
    • -m langfuse for monitoring
    • -m redis for vector store
    • -m neo4j for graph utilities
    • -m cqlFhir for CQL support
    • -m mcpFhir for MCP server
  2. Verify Compose Configuration:

    bash
    npx dhti-cli compose read

Phase 3: Install Elixir from Local Directory (If Applicable)

  1. Install the Generated Elixir:

    • Install the elixir from workspace directory in the current path.
    • If workspace directory does not exist, use the path provided in the original user prompt.
    • Use the -l flag to specify the directory.
    • Use the -n flag with the elixir's project name (starting with dhti-elixir-)
    bash
    npx dhti-cli elixir install -l <elixir-path> -n <elixir-name>

    Example:

    bash
    npx dhti-cli elixir install -l workspace/dhti-elixir/packages/glycemic_advisor -n glycemic_advisor
  2. Build Elixir Docker Image:

    bash
    npx dhti-cli docker -n dhti/genai-test:1.0 -t elixir

    Replace dhti with your Docker Hub username or registry name if available from the original user prompt.

Phase 4: Install Conch from Local Directory (If Applicable)

  1. Install the Generated Conch:

    • Install the conch from workspace directory in the current path.
    • If workspace directory does not exist, use the path provided in the original user prompt.
    • Use the -l flag to install the conch from the directory.
    • Use the -n flag with the conch's project name (starting with openmrs-esm-dhti-)
    bash
    npx dhti-cli conch install -l <conch-path> -n <conch-name>

    Example:

    bash
    npx dhti-cli conch install -l workspace/openmrs-esm-dhti/packages/esm-glycemic-advisor -n esm-glycemic-advisor

Hot Reload / Dev Sync (If Applicable)

Use these commands to sync work-in-progress code/assets into running containers. This is ideal for UI tweaks and elixir logic changes that don't alter dependencies.

  • Elixir dev sync
bash
npx dhti-cli elixir dev -d <<workspace>>/dhti-elixir/packages/<<elixir-name>> -n <<elixir-name>> -c dhti-langserve-1

Note: If dependencies change (e.g., requirements.txt or pyproject.toml), rebuild the image instead of using dev sync.

  • Conch dev sync
bash
npx dhti-cli conch dev -d <<workspace>>/openmrs-esm-dhti/packages/<<conch-name>> -n <<conch-name>> -c dhti-frontend-1

Tip: Clear your browser cache if assets look stale after syncing.

  • Update runtime bootstrap
bash
npx dhti-cli docker bootstrap -f <<workspace>>/bootstrap.py -c dhti-langserve-1

When run for the first time, this command copies the bootstrap file into the langserve container's workspace. Subsequent runs sync changes from the local bootstrap file to the container, allowing you to update runtime configuration without rebuilding the image. This is especially useful for tweaking model settings, tool configurations, or other parameters defined in the bootstrap file during development.
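Since dependency changes require a rebuild while code changes only need a dev sync, one option is to checksum the dependency file and rebuild only when it changes. The `deps_changed` helper below is a hypothetical sketch, not part of dhti-cli:

```shell
# Sketch: hypothetical helper — prints "rebuild" when the dependency file's
# checksum differs from the recorded one, "sync" otherwise.
deps_changed() {
  # usage: deps_changed <deps-file> <stamp-file>
  new=$(sha256sum "$1" | cut -d' ' -f1)
  old=$(cat "$2" 2>/dev/null || true)
  if [ "$new" != "$old" ]; then
    echo "$new" > "$2"   # record the new checksum
    echo "rebuild"
  else
    echo "sync"
  fi
}
```

For example, `deps_changed pyproject.toml .deps.sha` prints `rebuild` the first time (and after any dependency edit) and `sync` otherwise, so a wrapper script could choose between the docker build command and the dev-sync command above.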

Phase 5: Start DHTI Server

  1. Start All Services:
    bash
    npx dhti-cli docker -u

Phase 6: Decide Whether to Use the OpenMRS or CDS-Hooks Container

If a conch is installed, use the OpenMRS container to see the conch in action. If no conch is installed, you can use the CDS-Hooks container to test the elixir functionality.

If using OpenMRS container:

npx dhti-cli conch install # if you have not performed this step before.
npx dhti-cli conch start -s packages/<<conch-name>>

Example:

npx dhti-cli conch install # if you have not performed this step before.
npx dhti-cli conch start -s packages/esm-chatbot-agent
  • Log in to OpenMRS at http://localhost:8080/openmrs/spa/home
    • Username: admin
    • Password: Admin123

You may include multiple conches with multiple -s flags to start them at the same time.
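The repeated -s flags can be assembled in a small loop when several conches are installed. A sketch; the conch names are examples drawn from this guide:

```shell
# Sketch: build a multi-conch start command (conch names are examples).
conches="esm-chatbot-agent esm-glycemic-advisor"
args=""
for c in $conches; do
  args="$args -s packages/$c"
done
# Print the assembled command rather than running it, so it can be reviewed first.
echo "npx dhti-cli conch start$args"
```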

  1. Wait for Services to Initialize:

    • Wait 2-3 minutes for all services to fully start
    • Monitor logs: docker compose logs -f
  2. Hand off with the following instructions:

    • Navigate to http://localhost:8080/openmrs/spa/home
    • Login credentials:
      • Username: admin
      • Password: Admin123

If using CDS-Hooks container:

  • First kill any running processes on port 8080 to free it for the CDS-Hooks container, then start the CDS-Hooks container with the elixir.
npx dhti-cli elixir start -n <<elixir-name>> &

Example:

npx dhti-cli elixir start -n glycemic_advisor &

Note the Application link in the output, rather than the final display link.

  1. Wait for Services to Initialize:
  • Wait 2-3 minutes for all services to fully start
  • Monitor logs: docker compose logs -f
  2. Hand off with the following instructions:
  • Request the user to run npx dhti-cli elixir start -n <<elixir-name>> to start the CDS-Hooks container with the elixir.
  • Display the Application link: <<application-link>>
  • Request to open application link in browser to access it.

Phase 7: Clean Up (If Needed)

  1. Cleanup Instructions:
    • Provide commands to stop and remove containers:
      bash
      npx dhti-cli docker -d
    • Document how to restart the server
    • Explain how to rebuild images after changes

Development Mode (Optional)

Dry-Run Mode

Before making any changes, use the --dry-run flag to preview actions:

bash
npx dhti-cli compose add -m openmrs -m langserve --dry-run
npx dhti-cli elixir install -l <path> -n <name> --dry-run
npx dhti-cli conch install -l <path> -n <name> --dry-run

Troubleshooting

Common issues and solutions:

  1. Docker containers fail to start:

    • Check Docker is running: docker ps
    • Verify port availability (80, 8080, 8000)
    • Check Docker logs: docker compose logs <service-name>
  2. Elixir installation fails:

    • Verify the path exists and is accessible
    • Check that the elixir project structure is correct
    • Ensure pyproject.toml exists in the elixir directory
  3. Conch installation fails:

    • Verify the path exists and is accessible
    • Check that routes.json exists in the conch directory
    • Ensure the conch follows OpenMRS ESM structure
  4. OpenMRS not accessible:

    • Wait longer for initialization (3-5 minutes)
    • Check nginx gateway logs
    • Verify all containers are running: docker ps
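Port conflicts (item 1 above) can be probed without extra tools using bash's /dev/tcp pseudo-device. A bash-only sketch — prefer lsof or ss where available:

```shell
# Sketch: bash-only port probe — returns 0 if something is listening on the
# given local port, nonzero otherwise. Relies on bash's /dev/tcp redirection.
port_in_use() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

# Check the ports DHTI's services expect to bind.
for port in 80 8080 8000; do
  if port_in_use "$port"; then
    echo "port $port: in use"
  else
    echo "port $port: free"
  fi
done
```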

Notes

  • Always use dry-run mode first to verify commands
  • Generated projects can be version controlled and shared
  • Docker images can be pushed to registries for deployment
  • The skill creates a complete DHTI application ready for testing and prototyping
  • The user conversation may provide context on the work you have done in the past. Always internalize that and reuse it where possible.
