GraphRAG 1.1.0 Release Notes
26/03/2026
These release notes outline the new features, enhancements, and updated compatibility information for the GraphRAG application, minor version 1.1.0.
Conversation Title Editing via API
Previously, conversation titles were automatically generated from the first question and could not be changed. A new endpoint now allows clients to change the title of an existing conversation, making it easier to keep conversations organized:
PATCH /conversations/{id}/title?title={new_title}
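A minimal sketch of calling this endpoint, assuming a hypothetical base URL and bearer-token authentication (neither is specified in these notes). Because the new title travels as a query parameter, it must be percent-encoded:

```python
from urllib.parse import quote

def rename_conversation_url(base_url: str, conversation_id: str, new_title: str) -> str:
    # Build the PATCH URL for the title-editing endpoint.
    # quote() percent-encodes spaces and special characters in the title.
    return f"{base_url}/conversations/{conversation_id}/title?title={quote(new_title)}"

url = rename_conversation_url("https://graphrag.example.com/rest", "42", "Quarterly KPI review")
# The request itself would then be sent with any HTTP client, e.g.:
#   requests.patch(url, headers={"Authorization": f"Bearer {token}"})
```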
Custom Parameter
The new parameter custom has been added to the chat endpoint to forward and persist any type of custom data for a question. This allows clients to attach arbitrary metadata (such as session context, tracking IDs, or application-specific flags) to individual questions without requiring changes to the core API contract. Custom data is stored with the conversation, supporting downstream analytics, auditability, and client-specific processing logic.
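As an illustration of attaching custom data to a question, the sketch below builds a chat request body. Only the `custom` parameter is confirmed by these notes; the other field names (`question`, `conversationId`) and the contents of the custom object are assumptions for illustration:

```python
import json

# Hypothetical chat request body. The `custom` object can carry any
# client-defined metadata; it is persisted with the conversation.
payload = {
    "question": "Which suppliers are affected by the new regulation?",
    "conversationId": "42",
    "custom": {
        "sessionContext": "procurement-dashboard",
        "trackingId": "req-7f2c",
        "flags": {"experimentalRanking": True},
    },
}

body = json.dumps(payload)  # serialized request body for the chat endpoint
```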
Responsive Design Support
Previously, the GraphRAG UI was optimized only for desktop browsers, limiting usability for stakeholders accessing the application from tablets or mobile devices. This enhancement broadens accessibility across form factors, supporting field-based and on-the-go usage scenarios without requiring a dedicated mobile application.
Additional Vector Store Default Presets
In addition to existing support for Elasticsearch and OpenSearch, GraphRAG now includes custom vector store presets, making it easier to integrate with any vector store accessible through the Components Service API and the main native n8n vector search nodes.
New Server-Sent Event Message Types
Two new message types have been added to the streaming contract to support observability and debugging:
- TOKEN_USAGE_DETAILS: Provides token consumption details for each step and the total usage per question.
- INTROSPECTION: Contains debug and trace data, such as the main workflow execution ID.
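The sketch below parses a raw Server-Sent Events payload and dispatches on the two new message types. The event names come from these notes; the data fields (`step`, `inputTokens`, `workflowExecutionId`, etc.) are hypothetical and shown only to illustrate the stream shape:

```python
import json

def parse_sse(stream: str):
    """Parse a raw text/event-stream payload into (event, data) pairs.

    Events are separated by blank lines; each event carries `event:`
    and one or more `data:` lines.
    """
    events = []
    for block in stream.strip().split("\n\n"):
        event, data = None, []
        for line in block.splitlines():
            if line.startswith("event:"):
                event = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data.append(line[len("data:"):].strip())
        events.append((event, json.loads("\n".join(data))))
    return events

# Illustrative stream; field names are assumptions, not the documented schema.
sample = (
    "event: TOKEN_USAGE_DETAILS\n"
    'data: {"step": "main_answer", "inputTokens": 1200, "outputTokens": 310}\n'
    "\n"
    "event: INTROSPECTION\n"
    'data: {"workflowExecutionId": "exec-81f3"}\n'
)
events = parse_sse(sample)
```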
The workflow engine now comprises 27 workflows organized into a four-stage pipeline: Pre-processing → Data Gathering → Main Answer → Post-Answer Processing.
Key workflow architectural updates for this release include but are not limited to:
- Parallel Step Framework: Data gathering and post-answer processing steps now run concurrently. A configurable timeout (parallelStepsMaxTimeout) is applied, and individual steps can be selectively enabled or disabled.
- LLM Call Workflow: A centralized workflow for Large Language Model invocation features a primary/fallback model pattern, structured output validation, and a three-retry policy with a 500 ms waiting period.
- Configuration Centralization: Increased centralization and parameterization of environment variables within the configuration workflow (including URLs, model IDs, vector store presets, concept properties, and GraphDB MCP environment variables).
- Robust Error Handling: A six-layer strategy has been implemented, covering workflow-level error workflows, parallel step isolation, node-level retries, fallback model usage, configurable continue-on-error paths, and structured output repair mechanisms.
- Short-Term Memory Compression: Conversation context is automatically compressed via an LLM when it exceeds the configured token threshold (shortMemoryMaxUncompressedSizeInTokens).
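The primary/fallback pattern with a three-retry, 500 ms policy can be sketched as follows. This is not the n8n workflow itself, just a minimal illustration of the retry semantics described above; the model names and the `invoke` callable are hypothetical:

```python
import time

def call_llm(invoke, primary, fallback, retries=3, wait_s=0.5):
    """Invoke an LLM with a primary/fallback pattern.

    Tries the primary model up to `retries` times, waiting `wait_s`
    seconds (500 ms by default) between attempts, then repeats the
    same policy with the fallback model before giving up.
    """
    for model in (primary, fallback):
        for attempt in range(retries):
            try:
                return model, invoke(model)
            except Exception:
                if attempt < retries - 1:
                    time.sleep(wait_s)
    raise RuntimeError("both primary and fallback models failed")
```

A caller would pass its own `invoke(model)` function (the actual API call); structured output validation and repair would wrap the returned result in the real workflow.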
Note
A workflow changelog will be provided with future releases to clarify the exact changes made to the workflows.