Permify offers various options for configuring your Permify server. Below is an example configuration YAML file, with a glossary of the options underneath. You can also find this example config file in the Permify repo.
```yaml
# The server section specifies the HTTP and gRPC server settings,
# including whether or not TLS is enabled and the certificate and
# key file locations.
server:
  host: ""
  rate_limit: 100
  http:
    enabled: true
    port: 3476
    grpc_target_host: 127.0.0.1
    tls:
      enabled: true
      cert: /etc/letsencrypt/live/yourdomain.com/fullchain.pem
      key: /etc/letsencrypt/live/yourdomain.com/privkey.pem
  grpc:
    port: 3478
    tls:
      enabled: true
      cert: /etc/letsencrypt/live/yourdomain.com/fullchain.pem
      key: /etc/letsencrypt/live/yourdomain.com/privkey.pem

# The logger section sets the logging level for the service.
logger:
  level: info

# The profiler section enables or disables the pprof profiler and
# sets the port number for the profiler endpoint.
profiler:
  enabled: true
  port: 6060

# The authn section specifies the authentication method for the service.
authn:
  enabled: true
  method: preshared
  preshared:
    keys: []

# The tracer section enables or disables distributed tracing and sets the
# exporter and endpoint for the tracing data.
tracer:
  exporter: zipkin
  endpoint: http://localhost:9411/api/v2/spans
  enabled: true

# The meter section enables or disables metrics collection and sets the
# exporter and endpoint for the collected metrics.
meter:
  exporter: otlp
  endpoint: localhost:4318
  enabled: true

# The service section sets various service-level settings, including whether
# or not to use a circuit breaker, and cache sizes for schema, permission,
# and relationship data.
service:
  circuit_breaker: false
  watch:
    enabled: false
  schema:
    cache:
      number_of_counters: 1_000
      max_cost: 10MiB
  permission:
    bulk_limit: 100
    concurrency_limit: 100
    cache:
      number_of_counters: 10_000
      max_cost: 10MiB

# The database section specifies the database engine and connection settings,
# including the URI for the database, whether or not to auto-migrate the database,
# and connection pool settings.
database:
  engine: postgres
  uri: postgres://user:password@host:5432/db_name
  auto_migrate: false
  max_connections: 20 # Maximum number of connections in the pool (maps to pgxpool MaxConns)
  max_open_connections: 20 # Deprecated: use max_connections instead. Kept for backward compatibility.
  max_idle_connections: 1 # Deprecated: use min_connections instead. Kept for backward compatibility (maps to MinConns if min_connections is not set).
  min_connections: 0 # Minimum number of connections in the pool (maps to pgxpool MinConns). If 0 and max_idle_connections is set, max_idle_connections will be used.
  min_idle_connections: 0 # Minimum idle connections (maps to pgxpool MinIdleConns). Must be explicitly set if needed.
  max_connection_lifetime: 300s
  max_connection_idle_time: 60s
  health_check_period: 0s # Use pgxpool default (1 minute) if 0
  max_connection_lifetime_jitter: 0s # Will default to 20% of max_connection_lifetime if 0
  connect_timeout: 0s # Use pgx default (no timeout) if 0
  garbage_collection:
    enabled: true
    interval: 200h
    window: 200h
    timeout: 5m

# distributed configuration settings
distributed:
  # Indicates whether distributed mode is enabled or not
  enabled: true
  # The address of the distributed service.
  # Using a Kubernetes DNS name suggests this service runs in a Kubernetes cluster
  # under the 'default' namespace and is named 'permify'.
  address: "kubernetes:///permify.default"
  # The port on which the service is exposed
  port: "5000"
```
Permify supports OpenID Connect (OIDC). OIDC provides an identity layer on top of OAuth 2.0 to address the shortcomings of using OAuth 2.0 for establishing identity. With this authentication method, you can integrate your existing Identity Provider (IDP) to validate JSON Web Tokens (JWTs) using JSON Web Keys (JWKs). By doing so, only trusted tokens from the IDP will be accepted for authentication.
The authentication method can be either `oidc` or `preshared`.
| Required | Option | Default | Description |
|----------|--------|---------|-------------|
| No | `enabled` | `false` | Switch option to enable or disable the authentication config. |
| Yes | `audience` | - | The audience identifies the intended recipients of the token, typically the API or resource server. It ensures tokens are used only by the authorized party. |
| Yes | `issuer` | - | The URL of the provider responsible for authenticating users. This URL is used to discover information about the provider in step 1 of the authentication process. |
| Yes | `refresh_interval` | `15m` | The interval at which the authentication information is refreshed to ensure it remains valid and up to date. |
| Yes | `backoff_interval` | `12s` | The delay between retries when attempting to authenticate if the key is not found. The system retries at intervals, which may vary, to avoid constant retry attempts. |
| Yes | `backoff_frequency` | - | The duration to wait before retrying after a failed authentication attempt. Introducing a delay between retries keeps repeated failures from overwhelming the authentication service or generating excessive requests. Configure it according to the expected response times and reliability of the authentication provider. |
| Yes | `backoff_max_retries` | `5` | The maximum number of retry attempts to make if the key is not found. |
| Yes | `valid_methods` | `["RS256", "HS256"]` | A list of accepted signing methods for tokens. This ensures that only tokens signed using one of the specified algorithms will be considered valid. |
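Putting these options together, a minimal `oidc` authentication block might look like the sketch below. The `issuer` and `audience` values are placeholders for your own IDP, and the exact nesting should be checked against your Permify version:

```yaml
authn:
  enabled: true
  method: oidc
  oidc:
    # Placeholders: substitute your IDP's issuer URL and your API's audience.
    issuer: "https://idp.example.com"
    audience: "https://permify.example.com"
    refresh_interval: 15m
    backoff_interval: 12s
    backoff_max_retries: 5
    valid_methods: ["RS256", "HS256"]
```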
Configuration for the database where you want to store your authorization data (relation tuples, audits, decision logs, and the authorization model). The `engine` option selects the data source; Permify supports PostgreSQL (`postgres`) for now. Contact us about your preferred database.

| Required | Option | Default | Description |
|----------|--------|---------|-------------|
| Yes | `uri` | - | URI of your data source. |
| No | `writer.uri` | - | Writer URI of your data source. If not set, `uri` is used. |
| No | `reader.uri` | - | Reader URI of your data source. If not set, `uri` is used. |
| No | `auto_migrate` | `true` | When set to `false`, the migration flow will not run. |
| No | `max_connections` | `0` | Maximum number of connections in the pool (maps to pgxpool MaxConns). 0 means unlimited (pgx default). If not set, `max_open_connections` is used for backward compatibility. |
| No | `max_open_connections` (deprecated) | `20` | Deprecated: use `max_connections` instead. Kept for backward compatibility. |
| No | `max_idle_connections` (deprecated) | `1` | Deprecated: use `min_connections` instead. Kept for backward compatibility (maps to MinConns if `min_connections` is not set). |
| No | `min_connections` | `0` | Minimum number of connections in the pool (maps to pgxpool MinConns). If 0 and `max_idle_connections` is set, `max_idle_connections` is used. |
| No | `min_idle_connections` | `0` | Minimum number of idle connections in the pool (maps to pgxpool MinIdleConns). Must be explicitly set if needed. |
| No | `max_connection_lifetime` | `300s` | Maximum lifetime of a connection. |
| No | `max_connection_idle_time` | `60s` | Maximum time a connection can remain idle before it is closed. |
| No | `health_check_period` | `0s` | Period between health checks on idle connections. 0 means use the pgxpool default (1 minute). |
| No | `max_connection_lifetime_jitter` | `0s` | Jitter added to the connection lifetime to prevent all connections from expiring at once. 0 means it defaults to 20% of `max_connection_lifetime`. |
| No | `connect_timeout` | `0s` | Maximum time to wait when establishing a new connection. 0 means use the pgx default (no timeout). |
| No | `max_data_per_write` | `1000` | Maximum amount of data per write operation to the database. |
| No | `max_retries` | `10` | Maximum number of retries for database operations in case of failure. |
| No | `watch_buffer_size` | `100` | Buffer size for database watch operations, which determines how many changes can be queued. |
| No | `enabled` (for garbage collection) | `false` | Switch option for garbage collection. |
| No | `interval` | `3m` | Run period of a garbage collection operation. |
| No | `timeout` | `3m` | Duration of the garbage collection timeout. |
| No | `window` | `720h` | How far back the garbage collection process will clean. |
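As a sketch of the options above, a PostgreSQL configuration that splits reads and writes across a primary and a replica might look like this (host names and credentials are placeholders):

```yaml
database:
  engine: postgres
  uri: postgres://user:password@primary.example.com:5432/permify
  writer:
    uri: postgres://user:password@primary.example.com:5432/permify
  reader:
    uri: postgres://user:password@replica.example.com:5432/permify
  garbage_collection:
    enabled: true
    interval: 3m
    timeout: 3m
    window: 720h
```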
Production Best Practices: Connection Pooling with pgcat
For production deployments, especially when running multiple Permify instances, we strongly recommend using pgcat (a PostgreSQL connection pooler) for server-side connection pooling. This helps manage database connections efficiently and prevents connection exhaustion. For detailed information about pgcat setup, configuration, and best practices, see the Database Pooling with Pgcat guide.

Why use pgcat?

- Connection Management: pgcat manages a pool of connections to PostgreSQL, allowing multiple Permify instances to share connections efficiently
- Performance: Reduces connection overhead and improves query performance

Configuration Example

When using pgcat with session mode, configure Permify to connect through pgcat instead of directly to PostgreSQL:
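A minimal sketch, assuming pgcat is reachable at `pgcat.example.com` on its default listen port 6432 (both placeholders for your environment):

```yaml
database:
  engine: postgres
  # Connect through pgcat rather than directly to PostgreSQL.
  uri: postgres://user:password@pgcat.example.com:6432/permify
  # With a server-side pooler in front, a small client-side pool usually suffices.
  max_connections: 10
```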
Configuration for the Permify service and how it should behave. You can configure the circuit breaker pattern, the configuration watcher, and service-specific options for the permission and schema services (rate limiting, concurrency limiting, cache size).
pprof is a performance profiler for Go programs. It allows developers to analyze and understand the performance characteristics of their code by generating detailed profiles of program execution. When enabled, Permify exposes Go's standard pprof HTTP endpoints on the configured port (default 6060):
| Endpoint | Purpose |
|----------|---------|
| `GET /debug/pprof/profile?seconds=30` | CPU profile: shows which functions are consuming CPU cycles over the sampling window |
| `GET /debug/pprof/trace?seconds=5` | Execution trace: records goroutine scheduling, GC, and syscall events |
| `GET /debug/pprof/goroutine` | Lists all goroutines and their stack traces; useful for detecting goroutine leaks |
| `GET /debug/pprof/` | Index of all available profiles |
You can analyse captured profiles locally using Go’s built-in tooling:
```shell
# Download and open a 30-second CPU profile in the interactive UI
go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30
```
When to enable the profiler:
Investigating a CPU spike or unexpectedly high latency in Permify.
During load or capacity testing to identify bottlenecks before production.
When suspecting a performance regression after a version upgrade.
Best practice: Enable the profiler temporarily when needed, then disable it again. Keeping it permanently open in production is not recommended — it exposes an unauthenticated HTTP endpoint and adds a small constant overhead.
The pprof endpoint has no built-in authentication. Restrict network access to it (e.g. via a sidecar, internal network policy, or firewall rule) so it is not reachable from the public internet.
A consistent hashing ring ensures data distribution that minimizes reorganization when nodes are added or removed,
improving scalability and performance in distributed systems.
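To make this concrete, here is a minimal, hypothetical sketch of a consistent-hash ring in Python (not Permify's actual implementation; the `permify-*` node names and `subject:*` keys are illustrative). When a fourth node joins, only roughly a quarter of the keys change owner, whereas naive `hash(key) % n` routing would reassign most of them.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """A minimal consistent-hash ring with virtual nodes (vnodes)."""

    def __init__(self, nodes=(), vnodes=100):
        self.vnodes = vnodes
        self.ring = []  # sorted list of (hash, node) pairs
        for node in nodes:
            self.add(node)

    def _hash(self, key):
        return int(hashlib.sha256(key.encode()).hexdigest(), 16)

    def add(self, node):
        # Each node gets `vnodes` positions on the ring for smoother balance.
        for i in range(self.vnodes):
            bisect.insort(self.ring, (self._hash(f"{node}#{i}"), node))

    def remove(self, node):
        self.ring = [(h, n) for h, n in self.ring if n != node]

    def get(self, key):
        # A key is owned by the first vnode clockwise from its hash position.
        i = bisect.bisect(self.ring, (self._hash(key), "")) % len(self.ring)
        return self.ring[i][1]

ring = ConsistentHashRing(["permify-0", "permify-1", "permify-2"])
keys = [f"subject:{i}" for i in range(1000)]
before = {k: ring.get(k) for k in keys}

ring.add("permify-3")  # scale out by one node
moved = sum(1 for k in keys if ring.get(k) != before[k])
print(f"{moved} of {len(keys)} keys changed owner")
```

Because only the keys whose ring segment now belongs to the new node are reassigned, the rest of the cluster's cached state stays valid when membership changes.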