Configuring External LLM Services

12/01/2026

This page details the LLM integration framework in PoolParty 10.1, outlining how to use system properties to connect diverse model providers to your taxonomy development environment.

Earlier versions of PoolParty featuring the Taxonomy Advisor were limited to using Amazon Bedrock for all LLM capabilities. While this ensured stability, it restricted customers to Bedrock-specific models and prevented integration with other providers or self-hosted options.

With PoolParty 10.1, LLMs become pluggable runtime components. PoolParty supports three broad model types:

  • Proprietary cloud LLMs: models from providers such as OpenAI, Anthropic, Google, and Cohere, offering strong reasoning and reliability.

  • Open-source/self-hosted models: models such as Llama, Mistral, Qwen, and Phi, suited to full data control, private deployments, and fine-tuning.

  • Hybrid inference providers: services like TogetherAI or OctoAI that host optimized variants of open models.

This approach decouples model choice from the platform, allowing teams to adopt the LLM strategy that best fits performance, compliance, and cost requirements.
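As a rough illustration of how a provider might be selected through system properties, consider a configuration fragment like the following. The property keys, values, and model names below are hypothetical examples only, not PoolParty's actual configuration names; consult the PoolParty system configuration reference for the exact keys supported by your installation.

```properties
# Illustrative sketch only — these keys are hypothetical, not PoolParty's real property names.
# Example: pointing the platform at an OpenAI-compatible endpoint.
llm.provider=openai
llm.model=gpt-4o
llm.endpoint=https://api.openai.com/v1
# Reference a secret from the environment rather than hard-coding it.
llm.apiKey=${OPENAI_API_KEY}

# Example: a self-hosted open model behind the same kind of interface.
# llm.provider=self-hosted
# llm.model=llama-3-70b-instruct
# llm.endpoint=https://llm.internal.example.com/v1
```

Because the endpoint and model are plain configuration values, switching from a cloud provider to a self-hosted or hybrid inference service is a matter of editing these properties rather than changing the platform itself.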

Note

An active LLM configuration is required to use all available Taxonomy Builder features. If required, Graphwise offers a paid professional service to handle this setup for customers. For pricing and other details, please reach out to your Graphwise representative.