ADR Viewer - kalium

Decision record template by Michael Nygard

This is the template from "Documenting Architecture Decisions" by Michael Nygard. You can use adr-tools to manage the ADR files.

In each ADR file, write these sections:

Title

Status

What is the status, such as proposed, accepted, rejected, deprecated, superseded, etc.?

Context

What is the issue that we're seeing that is motivating this decision or change?

Decision

What is the change that we're proposing and/or doing?

Consequences

What becomes easier or more difficult to do because of this change?

1. Record architecture decisions

Date: 2025-12-03

Status

Accepted

Context

We need to document architectural decisions made during the development of the project in Kalium. This will help us keep track of the reasoning behind our decisions and provide context for future developers working on the project.

Decision

We will use Architecture Decision Records in the code and as part of the review process. We will use the lightweight ADR template to keep the ADRs simple and easy to maintain.

Consequences

  • We need to add a new folder to the repository, docs/adr, to keep the architecture decision records.
  • Whenever a significant architecture decision is made, we need to create a new ADR file in that folder.
  • We need to review the ADRs periodically to ensure they are still relevant and up-to-date.
  • This will help us maintain a clear record of our architecture decisions and the reasoning behind them, which will be useful for future reference and onboarding new team members.

2. Consolidate System Message Content Tables

Date: 2025-11-18

Status

Accepted

Context

The Kalium persistence layer previously used 10 separate tables to store different types of system message content:

  1. MessageMemberChangeContent
  2. MessageFailedToDecryptContent
  3. MessageConversationChangedContent
  4. MessageNewConversationReceiptModeContent
  5. MessageConversationReceiptModeChangedContent
  6. MessageConversationTimerChangedContent
  7. MessageFederationTerminatedContent
  8. MessageConversationProtocolChangedContent
  9. MessageLegalHoldContent
  10. MessageConversationAppsEnabledChangedContent

This design led to several issues:

  • Query complexity: The MessageDetailsView required a large number of LEFT JOINs to fetch message data, causing performance overhead
  • Schema rigidity: Adding new system message types required schema migrations, adding new tables, and updating the view
  • Unbounded view growth: Every new system message type required adding another LEFT JOIN to MessageDetailsView, making the view progressively larger and slower with each addition
  • Maintenance burden: Each new message type needed its own table definition, insert queries, update queries, and view integration
  • Query performance: Multiple JOINs across 10+ tables for every message query degraded performance, especially for large message lists

Decision

We consolidated all 10 system message content tables into a single MessageSystemContent table using a discriminator pattern with generic typed columns.

New Schema (Migration 120)

CREATE TABLE MessageSystemContent (
    message_id TEXT NOT NULL,
    conversation_id TEXT NOT NULL,
    content_type TEXT NOT NULL,  -- Discriminator: 'MEMBER_CHANGE', 'FAILED_DECRYPT', etc.

    -- Generic typed fields for flexible storage
    text_1 TEXT,        -- conversation_name, protocol, etc.
    integer_1 INTEGER,  -- message_timer, error_code, etc.
    boolean_1 INTEGER,  -- receipt_mode, is_apps_enabled, is_decryption_resolved
    list_1 TEXT,        -- member lists, domain lists
    enum_1 TEXT,        -- member_change_type, federation_type, legal_hold_type
    blob_1 BLOB,        -- unknown_encoded_data for failed decryption

    FOREIGN KEY (message_id, conversation_id) REFERENCES Message(id, conversation_id),
    PRIMARY KEY (message_id, conversation_id)
);

CREATE INDEX idx_system_content_type ON MessageSystemContent(content_type);

Field Mapping Pattern

Each content type maps its specific fields to the generic columns:

| Content Type | Field Mapping |
|--------------|---------------|
| MEMBER_CHANGE | list_1=members, enum_1=type |
| FAILED_DECRYPT | blob_1=data, boolean_1=resolved, integer_1=error_code |
| CONVERSATION_RENAMED | text_1=name |
| RECEIPT_MODE | boolean_1=enabled |
| TIMER_CHANGED | integer_1=duration_ms |
| FEDERATION_TERMINATED | list_1=domains, enum_1=type |
| PROTOCOL_CHANGED | text_1=protocol |
| LEGAL_HOLD | list_1=members, enum_1=type |
| APPS_ENABLED | boolean_1=enabled |
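For example, a member-change row populates only the columns from its mapping and leaves every other generic column NULL. The following insert is an illustrative sketch with made-up values, not a query from the codebase:

```sql
-- Hypothetical example row for the MEMBER_CHANGE discriminator;
-- text_1, integer_1, boolean_1, and blob_1 remain NULL for this type
INSERT INTO MessageSystemContent
    (message_id, conversation_id, content_type, list_1, enum_1)
VALUES
    ('msg-1', 'conv-1', 'MEMBER_CHANGE', '["user1","user2"]', 'ADDED');
```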

View Layer Abstraction

The MessageDetailsView uses CASE statements to maintain semantic column names, ensuring no breaking changes to application code:

CASE WHEN SystemContent.content_type = 'MEMBER_CHANGE'
     THEN SystemContent.list_1 END AS memberChangeList,
CASE WHEN SystemContent.content_type = 'MEMBER_CHANGE'
     THEN SystemContent.enum_1 END AS memberChangeType,

This allows existing Kotlin code to continue working without modifications:

val message = messagesQueries.selectById(messageId, conversationId).executeAsOne()
val members = message.memberChangeList  // Still works!
val changeType = message.memberChangeType  // Still works!

Migration Strategy

The migration (120.sqm) performs these steps:

  1. Creates the new MessageSystemContent table
  2. Migrates all data from the 10 old tables with proper field mapping
  3. Drops the old tables
  4. Recreates views to use the new consolidated table
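Step 2 can be sketched, per old table, as an INSERT ... SELECT followed by a DROP. This is a simplified illustration of the pattern, not the literal contents of 120.sqm:

```sql
-- Sketch: copy one old table into the consolidated table with field mapping,
-- then drop the old table
INSERT INTO MessageSystemContent (message_id, conversation_id, content_type, list_1, enum_1)
SELECT message_id, conversation_id, 'MEMBER_CHANGE', member_change_list, member_change_type
FROM MessageMemberChangeContent;

DROP TABLE MessageMemberChangeContent;
```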

Consequences

Positive

  • Query performance: 20-40% improvement expected due to JOIN reduction (12+ JOINs → 2 JOINs)
  • Schema flexibility: New system message types can be added without schema migrations
  • Simplified maintenance: Single table to manage instead of 10 separate tables
  • Index efficiency: Single composite primary key and one content_type index instead of 10 primary keys
  • Code compatibility: No breaking changes to application code thanks to view abstraction
  • Testing framework: Established SchemaMigrationTest pattern for future migrations

Negative

  • Reduced type safety: Generic column names (text_1, integer_1) are less self-documenting than specific columns (conversation_name, message_timer)
  • Documentation dependency: Developers must consult field mapping documentation to understand which generic column maps to which semantic field
  • NULL overhead: Each row contains 6 generic columns, most of which will be NULL for any given message type
  • Database-level validation: Cannot enforce NOT NULL constraints on specific fields per content type

Mitigation Strategies

  1. Documentation: Comprehensive field mapping documentation for all content types
  2. Convenience queries: Type-specific insert queries (e.g., insertSystemMemberChange) hide generic field names from developers
  3. View abstraction: MessageDetailsView provides semantic column names, maintaining code readability
  4. Migration guide: Detailed guide for updating Kotlin code

Usage Guidelines

Adding New System Message Types

When adding a new system message type:

  1. Choose a content_type discriminator value (e.g., 'NEW_MESSAGE_TYPE')
  2. Map semantic fields to generic columns
  3. Add convenience insert query in Messages.sq
  4. Update MessageDetailsView with CASE statements for the new type (requires migration)
  5. Update MessageInsertExtensionImpl.kt to use the new query
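Steps 2 and 3 together might look like the following in Messages.sq; the query name and bound columns here are hypothetical:

```sql
-- Hypothetical SQLDelight convenience query for a new content type;
-- the discriminator value is fixed, only the semantic fields are bound
insertSystemNewMessageType:
INSERT INTO MessageSystemContent (message_id, conversation_id, content_type, text_1)
VALUES (?, ?, 'NEW_MESSAGE_TYPE', ?);
```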

Querying System Messages

Always query through MessageDetailsView to get semantic column names:

// Good - uses view with semantic names
val message = messagesQueries.selectById(messageId, conversationId).executeAsOne()
val name = message.conversationName

// Avoid - direct table access with generic names
// This makes code harder to understand

Related Documentation

  • Migration SQL: persistence/src/commonMain/db_user/migrations/120.sqm

3. Database Migration Testing Framework

Date: 2025-11-18

Status

Accepted

Context

Database migrations in Kalium are critical operations that modify both schema structure and transform existing user data. Previously, migration testing was inconsistent and ad-hoc, with most migrations only tested through integration tests that ran the entire migration chain. This approach had several problems:

  • Insufficient coverage: Migrations were not tested individually with realistic data scenarios
  • Late failure detection: Migration bugs were only discovered after the migration ran on production databases
  • Data loss risk: No systematic verification that all data was correctly transformed during migrations
  • Debugging difficulty: When a migration failed, it was hard to pinpoint which specific data transformation was incorrect
  • No regression testing: Once a migration was released, we had no automated way to verify it continued working correctly

The consolidation of 10 system message content tables into a single table (Migration 120) highlighted the need for a robust testing framework that could:

  1. Load pre-migration database schemas from actual schema files
  2. Insert realistic test data into old schema structures
  3. Execute migration SQL in isolation
  4. Verify data integrity and completeness after migration
  5. Test edge cases like NULL values, complex data types, and large datasets

Decision

We created the SchemaMigrationTest framework, a comprehensive testing infrastructure for database migrations that involve both schema changes and data transformations.

Architecture

The framework consists of three main components:

1. Base Test Class: SchemaMigrationTest.kt

A base class that provides:

abstract class SchemaMigrationTest {
    protected fun runMigrationTest(
        schemaVersion: Int,
        setupOldSchema: (SqlDriver) -> Unit,
        migrationSql: () -> String,
        verifyNewSchema: (SqlDriver) -> Unit
    )
}

Key capabilities:

  • Loads pre-migration schema from .db files stored in src/commonTest/kotlin/com/wire/kalium/persistence/schemas/
  • Creates temporary test databases using JDBC SQLite driver
  • Executes multi-statement migration SQL
  • Provides helper methods for common database operations

Helper methods:

  • executeInsert(sql) - Insert data with simple SQL
  • countRows(tableName) - Get row count from a table
  • querySingleValue(sql, mapper) - Query and extract a single value
  • executeQuery(sql, mapper) - General-purpose query execution
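As a rough sketch, a helper like executeInsert can be a thin wrapper over the SQLDelight driver API (assumed implementation; the real base class may differ):

```kotlin
// Sketch: execute a parameterless INSERT statement on the test driver
protected fun SqlDriver.executeInsert(sql: String) {
    execute(identifier = null, sql = sql.trimIndent(), parameters = 0)
}
```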

2. Schema Files

Pre-migration database schemas stored as SQLite .db files:

persistence/src/commonTest/kotlin/com/wire/kalium/persistence/schemas/
├── 124.db
└── ...

These files are snapshots of the actual schema before a migration runs, generated using:

./gradlew :persistence:generateCommonMainUserDatabaseInterface
cp persistence/src/commonMain/db_user/schemas/124.db \
   persistence/src/commonTest/kotlin/com/wire/kalium/persistence/schemas/124.db

3. Migration Tests

Individual test classes that extend SchemaMigrationTest and test specific migrations:

class Migration120Test : SchemaMigrationTest() {
    @Test
    fun testMemberChangeContentMigration() = runTest(dispatcher) {
        runMigrationTest(
            schemaVersion = 120,
            setupOldSchema = { driver ->
                // Insert test data into old schema
                driver.executeInsert("""
                    INSERT INTO MessageMemberChangeContent
                    (message_id, conversation_id, member_change_list, member_change_type)
                    VALUES ('msg-1', 'conv-1', '["user1", "user2"]', 'ADDED')
                """)
            },
            migrationSql = {
                // Read actual migration SQL from file
                File("src/commonMain/db_user/migrations/120.sqm").readText()
            },
            verifyNewSchema = { driver ->
                // Verify the migrated data
                val count = driver.countRows("MessageSystemContent")
                assertEquals(1, count)

                var contentType: String? = null
                var list1: String? = null

                driver.executeQuery(null, """
                    SELECT content_type, list_1
                    FROM MessageSystemContent
                    WHERE message_id = 'msg-1'
                """.trimIndent(), { cursor ->
                    if (cursor.next().value) {
                        contentType = cursor.getString(0)
                        list1 = cursor.getString(1)
                    }
                    QueryResult.Unit
                }, 0)

                assertEquals("MEMBER_CHANGE", contentType)
                assertEquals("""["user1", "user2"]""", list1)

                // Verify old table was dropped
                assertTableDoesNotExist(driver, "MessageMemberChangeContent")
            }
        )
    }
}

Testing Strategy

We categorize migrations into two types, each with different testing approaches:

Type 1: Schema-only Migrations

Migrations that only modify database structure without transforming existing data.

Example: Adding a new column with a default value

Testing approach: Use simpler unit tests (like EventMigration109Test style)

When to use:

  • Adding new tables
  • Adding columns with default values
  • Creating indexes
  • Adding constraints

Type 2: Schema + Content Migrations

Migrations that both modify database structure AND transform existing data.

Example: Consolidating multiple tables into one (Migration 120)

Testing approach: Use SchemaMigrationTest framework (this ADR)

When to use:

  • Consolidating tables
  • Transforming data formats
  • Moving data between tables
  • Complex multi-step migrations

Test Coverage Requirements

For Schema + Content migrations, each test class should include:

  1. Individual content type tests: One test per data type being migrated
  2. Data integrity tests: Verify ALL fields are correctly migrated, including NULL values
  3. Multiple messages test: Test migrating several different message types simultaneously
  4. Index creation tests: Verify indexes are created correctly
  5. Old table cleanup tests: Verify old tables are dropped
  6. Edge case tests: Empty strings, NULL values, special characters, large datasets

Example from Migration120Test:

  • 11 individual content type tests
  • 6 comprehensive data integrity tests
  • 1 multi-message migration test
  • 1 index creation test
  • Total: 19+ test cases

Working with Different Data Types

The framework supports all SQLite data types:

TEXT:

driver.executeInsert("""
    INSERT INTO Table (id, name)
    VALUES ('id-1', 'Test Name')
""")

INTEGER (including booleans as 0/1):

driver.executeInsert("""
    INSERT INTO Table (id, is_enabled)
    VALUES ('id-1', 1)  -- true
""")

BLOB:

val testData = byteArrayOf(0x01, 0x02, 0x03)
driver.execute(null, """
    INSERT INTO Table (id, data) VALUES (?, ?)
""".trimIndent(), 2) {
    bindString(0, "id-1")
    bindBytes(1, testData)
}

Reading BLOB data:

var blob: ByteArray? = null
driver.executeQuery(null, "SELECT data FROM Table WHERE id = 'id-1'", { cursor ->
    if (cursor.next().value) {
        blob = cursor.getBytes(0)
    }
    QueryResult.Unit
}, 0)

Consequences

Positive

  • Early bug detection: Migration bugs are caught during development, not in production
  • Data integrity guarantee: Systematic verification ensures no data loss during migrations
  • Regression prevention: Migrations remain tested even after release
  • Realistic testing: Uses actual schema files and migration SQL, not mocks
  • Fast feedback: Tests run in milliseconds using in-memory SQLite databases
  • Documentation: Tests serve as executable documentation of migration behavior
  • Confidence: Developers can refactor migrations knowing tests will catch breaks
  • Reusable infrastructure: Base class and helpers make writing new tests straightforward

Negative

  • Manual schema management: Developers must remember to copy schema files before running migrations
  • Storage overhead: Schema .db files must be committed to the repository (typically 50-100 KB each)
  • JVM-only: Tests only run on JVM target, not on Android or iOS
  • Maintenance burden: Each migration requires significant test code (Migration120Test is ~1000 lines)
  • Slower CI builds: More comprehensive tests increase CI build time
  • Learning curve: Developers need to understand SQLite JDBC API and framework patterns

Mitigation Strategies

  1. Helper methods: Base class provides common operations to reduce boilerplate
  2. Template tests: Migration120Test serves as a template for future migration tests
  3. CI integration: Automated checks verify schema files exist before allowing PR merge
  4. Code review checklist: Ensure all migration PRs include corresponding tests

Best Practices

When writing migration tests:

  1. Export schema files BEFORE running the migration

    ./gradlew :persistence:generateCommonMainUserDatabaseInterface
    cp persistence/src/commonMain/db_user/schemas/124.db \
       persistence/src/commonTest/kotlin/com/wire/kalium/persistence/schemas/124.db
    
  2. Read migration SQL from actual .sqm files

    private fun getMigration120Sql(): String {
        return File("src/commonMain/db_user/migrations/120.sqm").readText()
    }
    

    This ensures tests always use the real migration SQL.

  3. Test each content type separately

    @Test fun testMemberChangeContentMigration()
    @Test fun testFailedToDecryptContentMigration()
    @Test fun testConversationRenamedContentMigration()
    
  4. Verify ALL fields, not just the happy path

    // Verify used fields have correct values
    assertEquals("MEMBER_CHANGE", contentType)
    assertEquals("""["user1", "user2"]""", list1)
    
    // Verify unused fields are NULL
    assertNull(text1)
    assertNull(integer1)
    assertNull(blob1)
    
  5. Always verify old tables are dropped

    assertTableDoesNotExist(driver, "MessageMemberChangeContent")
    
  6. Test with realistic data

    • Use actual user IDs, conversation IDs
    • Include special characters, emojis
    • Test edge cases: empty strings, NULL values, large datasets
  7. Use descriptive test names

    @Test fun testMemberChangeDataIntegrity_AllFieldsPreserved()
    @Test fun testAllSystemMessageTypesNoDataLoss()
    

Running the Tests

# Run all migration tests for migration 120
./gradlew :persistence:jvmTest --tests "*Migration120Test"

# Run a specific test
./gradlew :persistence:jvmTest --tests "*Migration120Test.testMemberChangeContentMigration"

# Run all schema migration tests
./gradlew :persistence:jvmTest --tests "*migration*"

Example: Creating a New Migration Test

Step 1: Export the schema file BEFORE applying the migration

./gradlew :persistence:generateCommonMainUserDatabaseInterface
cp persistence/src/commonMain/db_user/schemas/121.db \
   persistence/src/commonTest/kotlin/com/wire/kalium/persistence/schemas/121.db

Step 2: Create the test class

package com.wire.kalium.persistence.dao.migration

import java.io.File
import kotlin.test.Test
import kotlin.test.assertEquals
import kotlinx.coroutines.test.runTest

class Migration121Test : SchemaMigrationTest() {

    companion object {
        private const val MESSAGE_ID = "test-message-id"
        private const val CONVERSATION_ID = "test-conversation-id"
        private const val MIGRATION_NAME = 121
    }

    private fun getMigration121Sql(): String {
        val migrationFile = File("src/commonMain/db_user/migrations/$MIGRATION_NAME.sqm")
        if (!migrationFile.exists()) {
            error("Migration file not found: ${migrationFile.absolutePath}")
        }
        return migrationFile.readText()
    }

    @Test
    fun testMyDataMigration() = runTest(dispatcher) {
        runMigrationTest(
            schemaVersion = MIGRATION_NAME,
            setupOldSchema = { driver ->
                // Insert test data into old schema
                driver.executeInsert("""
                    INSERT INTO OldTable (id, value)
                    VALUES ('$MESSAGE_ID', 'test-value')
                """)
            },
            migrationSql = { getMigration121Sql() },
            verifyNewSchema = { driver ->
                // Verify the migrated data
                val count = driver.countRows("NewTable")
                assertEquals(1, count)

                var migratedValue: String? = null
                driver.executeQuery(null, """
                    SELECT value FROM NewTable WHERE id = '$MESSAGE_ID'
                """.trimIndent(), { cursor ->
                    if (cursor.next().value) {
                        migratedValue = cursor.getString(0)
                    }
                    QueryResult.Unit
                }, 0)

                assertEquals("test-value", migratedValue)

                // Verify old table was dropped
                assertTableDoesNotExist(driver, "OldTable")
            }
        )
    }
}

Step 3: Run the test

./gradlew :persistence:jvmTest --tests "*Migration121Test"

Complete Example: Migration 120 - System Message Consolidation

Migration 120 consolidated 10 separate system message content tables into a single MessageSystemContent table. The test suite demonstrates comprehensive coverage with the following test types:

Basic Migration Tests (11 tests)

Each old table gets its own test to verify correct migration:

@Test
fun testMemberChangeContentMigration() = runTest(dispatcher) {
    runMigrationTest(
        schemaVersion = 120,
        setupOldSchema = { driver ->
            driver.executeInsert("""
                INSERT INTO MessageMemberChangeContent
                (message_id, conversation_id, member_change_list, member_change_type)
                VALUES ('$MESSAGE_ID', '$CONVERSATION_ID', '["user1", "user2"]', 'ADDED')
            """)
        },
        migrationSql = { getMigration120Sql() },
        verifyNewSchema = { driver ->
            val count = driver.countRows("MessageSystemContent")
            assertEquals(1, count)

            var contentType: String? = null
            var list1: String? = null
            var enum1: String? = null

            driver.executeQuery(null, """
                SELECT content_type, list_1, enum_1
                FROM MessageSystemContent
                WHERE message_id = '$MESSAGE_ID'
            """.trimIndent(), { cursor ->
                if (cursor.next().value) {
                    contentType = cursor.getString(0)
                    list1 = cursor.getString(1)
                    enum1 = cursor.getString(2)
                }
                QueryResult.Unit
            }, 0)

            assertEquals("MEMBER_CHANGE", contentType)
            assertEquals("""["user1", "user2"]""", list1)
            assertEquals("ADDED", enum1)

            assertTableDoesNotExist(driver, "MessageMemberChangeContent")
        }
    )
}

Similar tests cover:

  • testFailedToDecryptContentMigration - Tests BLOB and error code migration
  • testConversationRenamedContentMigration - Tests text field migration
  • testReceiptModeContentMigration - Tests boolean field migration from two separate tables
  • testTimerChangedContentMigration - Tests integer field migration
  • testFederationTerminatedContentMigration - Tests list and enum migration
  • testProtocolChangedContentMigration - Tests enum-only migration
  • testLegalHoldContentMigration - Tests complex list with enums
  • testAppsEnabledChangedContentMigration - Tests boolean field migration
  • testMultipleSystemMessagesMigration - Tests migrating all types together
  • testIndexCreation - Verifies indexes are created correctly

Data Integrity Tests (6 tests)

Verify ALL fields are correctly migrated, including NULL values for unused fields:

@Test
fun testMemberChangeDataIntegrity_AllFieldsPreserved() = runTest(dispatcher) {
    runMigrationTest(
        schemaVersion = 120,
        setupOldSchema = { driver ->
            driver.executeInsert("""
                INSERT INTO MessageMemberChangeContent
                (message_id, conversation_id, member_change_list, member_change_type)
                VALUES ('$MESSAGE_ID', '$CONVERSATION_ID',
                        '["user1", "user2"]', 'FEDERATION_REMOVED')
            """)
        },
        migrationSql = { getMigration120Sql() },
        verifyNewSchema = { driver ->
            var messageId: String? = null
            var conversationId: String? = null
            var contentType: String? = null
            var text1: String? = null
            var integer1: Long? = null
            var boolean1: Long? = null
            var list1: String? = null
            var enum1: String? = null
            var blob1: ByteArray? = null

            driver.executeQuery(null, """
                SELECT message_id, conversation_id, content_type,
                       text_1, integer_1, boolean_1, list_1, enum_1, blob_1
                FROM MessageSystemContent
                WHERE message_id = '$MESSAGE_ID'
            """.trimIndent(), { cursor ->
                if (cursor.next().value) {
                    messageId = cursor.getString(0)
                    conversationId = cursor.getString(1)
                    contentType = cursor.getString(2)
                    text1 = cursor.getString(3)
                    integer1 = cursor.getLong(4)
                    boolean1 = cursor.getLong(5)
                    list1 = cursor.getString(6)
                    enum1 = cursor.getString(7)
                    blob1 = cursor.getBytes(8)
                }
                QueryResult.Unit
            }, 0)

            // Verify PKs
            assertEquals(MESSAGE_ID, messageId)
            assertEquals(CONVERSATION_ID, conversationId)

            // Verify content type
            assertEquals("MEMBER_CHANGE", contentType)

            // Verify used fields
            assertEquals("""["user1", "user2"]""", list1)
            assertEquals("FEDERATION_REMOVED", enum1)

            // Verify unused fields are NULL
            assertNull(text1)
            assertNull(integer1)
            assertNull(boolean1)
            assertNull(blob1)
        }
    )
}

Similar comprehensive tests for:

  • testFailedDecryptDataIntegrity_AllFieldsPreserved
  • testConversationRenamedDataIntegrity_AllFieldsPreserved
  • testAllReceiptModeTypesDataIntegrity
  • testFederationTerminatedDataIntegrity_ComplexListPreserved
  • testLegalHoldDataIntegrity_ComplexMemberList
  • testAllSystemMessageTypesNoDataLoss - Tests all 10 types together with full field verification

Total: 19+ test cases covering every migration scenario, edge case, and data integrity requirement.

Troubleshooting Common Issues

Schema file not found

Error: Schema file not found: /com/wire/kalium/persistence/schemas/124.db

Solution: Make sure the schema file exists at: persistence/src/commonTest/kotlin/com/wire/kalium/persistence/schemas/124.db

Generate it with:

./gradlew :persistence:generateCommonMainUserDatabaseInterface
cp persistence/src/commonMain/db_user/schemas/124.db \
   persistence/src/commonTest/kotlin/com/wire/kalium/persistence/schemas/124.db

Migration SQL fails

Error: Failed to execute migration statement: ...

Solution:

  1. Check that your migration SQL is valid SQLite
  2. Ensure the old tables exist in the schema file
  3. Verify foreign key constraints are satisfied
  4. Test each SQL statement individually

Table already exists

Error: table MessageSystemContent already exists

Solution: Make sure your migration SQL uses CREATE TABLE IF NOT EXISTS or the schema file is from before the migration. The schema file should be generated BEFORE you write the migration SQL.

BLOB data not migrating correctly

// Wrong - string binding for BLOB
driver.execute(null, "INSERT INTO Table (data) VALUES (?)", 1) {
    bindString(0, testData.toString()) // Wrong!
}

// Correct - bytes binding for BLOB
driver.execute(null, "INSERT INTO Table (data) VALUES (?)", 1) {
    bindBytes(0, testData) // Correct!
}

Boolean values not working

SQLite doesn't have a native boolean type. Use INTEGER with 0/1:

// Insert
driver.executeInsert("INSERT INTO Table (is_enabled) VALUES (1)")  // true

// Query - returns Long, not Boolean
val isEnabled: Long? = cursor.getLong(0)  // 0 or 1
assertEquals(1L, isEnabled)  // true

Future Improvements

Potential enhancements to the framework:

  1. Automatic schema file management: Git hooks or Gradle tasks to automatically copy schema files
  2. Multi-platform support: Extend tests to run on Android and iOS targets
  3. Performance testing: Measure migration execution time with large datasets
  4. Data generation helpers: Factory methods for creating realistic test data
  5. Migration comparison: Tools to diff schemas before/after migration
  6. Visual regression: Generate schema diagrams before/after migration for documentation

Related Files

  • Example Test: persistence/src/jvmTest/kotlin/com/wire/kalium/persistence/dao/migration/Migration120Test.kt - Reference implementation with 19+ test cases
  • Base Class: persistence/src/jvmTest/kotlin/com/wire/kalium/persistence/dao/migration/SchemaMigrationTest.kt - Framework implementation
  • Related ADR: ADR 0002 - Consolidate System Message Content Tables
  • Migration File: persistence/src/commonMain/db_user/migrations/124.sqm - Actual migration SQL

4. Module Boundary Restructuring

Date: 2025-11-27

Status

Accepted

Context

The Kalium codebase had grown organically with modules at the root level (e.g., :common, :data, :persistence, :network, :backup, :calling, :cells). This flat structure made it difficult to understand module relationships, enforce architectural boundaries, and maintain clear separation of concerns. As the codebase scaled, we needed a more organized module hierarchy that would:

  • Clearly define architectural layers and their responsibilities
  • Make module dependencies more explicit and easier to reason about
  • Improve discoverability by grouping related modules together
  • Enforce better separation between core infrastructure, data access, domain logic, and testing utilities

Decision

We reorganized modules into a hierarchical structure with clear architectural boundaries:

Core Layer (:core:*) - Foundation modules:

  • :core:common (was :common)
  • :core:data (was :data)
  • :core:cryptography (was :cryptography)
  • :core:logger (was :logger)
  • :core:util (was :util)

Data Layer (:data:*) - Data access and infrastructure:

  • :data:network (was :network)
  • :data:network-model (was :network-model)
  • :data:network-util (was :network-util)
  • :data:persistence (was :persistence)
  • :data:persistence-test (was :persistence-test)
  • :data:protobuf (was :protobuf)
  • :data:data-mappers (new module for transformations)

Domain Layer (:domain:*) - Business logic boundaries:

  • :domain:backup (was :backup)
  • :domain:calling (was :calling)
  • :domain:cells (was :cells)
  • :domain:conversation-history (new)
  • :domain:messaging:sending (new, extracted from logic)
  • :domain:messaging:receiving (new, extracted from logic)

Test Layer (:test:*) - Testing utilities:

  • :test:mocks (was :mocks)
  • :test:data-mocks (new)
  • :test:benchmarks (was :benchmarks)
  • :test:tango-tests (was :tango-tests)

Sample/Tools Layer (:sample:*, :tools:*):

  • :sample:cli (was :cli)
  • :sample:samples (was :samples)
  • :tools:testservice (was :testservice)
  • :tools:monkeys (was :monkeys)
  • :tools:backup-verification (new)
  • :tools:protobuf-codegen (was :protobuf-codegen)

All module references were updated throughout the codebase including build files, CI workflows, documentation, and the dependency graph visualization.
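In settings.gradle.kts, the restructuring amounts to nesting the include paths (an illustrative before/after; the actual file lists many more modules):

```kotlin
// Before: flat module includes at the repository root
include(":common", ":network", ":persistence")

// After: layer-scoped module includes
include(":core:common", ":data:network", ":data:persistence")
```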

Consequences

Benefits:

  • Clearer Architecture: The layer-based structure makes the architecture immediately visible from the project structure
  • Better Discoverability: Developers can quickly locate modules by their architectural purpose
  • Enforced Boundaries: The naming scheme makes it obvious when a module is reaching across layers inappropriately
  • Improved Documentation: Module paths now self-document their architectural role (e.g., :data:network vs just :network)
  • Scalability: New modules can be added to appropriate layers without cluttering the root
  • Easier Onboarding: New team members can understand the system organization more quickly

Trade-offs:

  • Migration Effort: All module references needed updating across build files, CI workflows, and documentation
  • Longer Module Paths: Module references are now more verbose (e.g., projects.core.common instead of projects.common)
  • Breaking Change: External consumers referencing modules directly will need to update their references

Technical Changes:

  • Updated all implementation(projects.*) references in build files
  • Updated CI workflow gradle task paths (e.g., :cli:assemble → :sample:cli:assemble)
  • Updated module graph configuration to show full paths and nest by module type
  • Updated detekt baseline and project structure documentation
  • Added README files to layer directories explaining their purpose and guidelines
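
The build-file reference updates are mechanical and lend themselves to scripting. Below is a hypothetical Python sketch of such a rewrite; the MODULE_MOVES mapping shows only a few of the actual moves, and the real update also covered CI workflows and documentation:

```python
import re

# Hypothetical old-path -> new-path mapping; the real migration covered all modules.
MODULE_MOVES = {
    "projects.common": "projects.core.common",
    "projects.network": "projects.data.network",
    "projects.cli": "projects.sample.cli",
}

# Longest keys first, with a trailing word boundary, so one accessor
# never matches inside a longer one.
_PATTERN = re.compile(
    "|".join(re.escape(old) + r"\b"
             for old in sorted(MODULE_MOVES, key=len, reverse=True))
)

def rewrite_module_refs(build_file_text: str) -> str:
    """Rewrite typesafe project accessors in a build file to their new layer paths."""
    return _PATTERN.sub(lambda m: MODULE_MOVES[m.group(0)], build_file_text)
```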

5. Lightweight COUNT for Faster Conversation List Loading

Date: 2025-12-03

Status

Accepted

Context

On Android clients with a large number of conversations and heavy message history, the conversation list was slow to load.
Profiling showed a significant delay during the COUNT phase executed before the Paging source loads items.

The existing COUNT query used the ConversationDetails view, which performs many joins, user metadata checks, visibility rules, and sorting-related logic.
This resulted in:

  • creation of temporary B-trees
  • expensive scans over large joined structures
  • noticeable time-to-interactive (TTI) spikes on older devices and on devices with large databases

Most of this logic is needed for the conversation list itself, but not for the COUNT used by the Paging library. Paging only requires an upper bound, not the exact filtered set.

Decision

Use a lightweight COUNT that operates directly on the Conversation table, but only when:

  • searchQuery is empty (no text search filtering required)
  • no metadata-dependent visibility logic is needed for COUNT

The new countConversations query applies the same core filters as the real paging SELECT:

  • exclude conversations of type SELF
  • apply the conversationFilter (ALL, GROUPS, ONE_ON_ONE, CHANNELS)
  • apply the archived flag
  • exclude locally deleted conversations (deleted_locally)
  • apply protocol and MLS group state rules (strict MLS, ESTABLISHED, PENDING_AFTER_RESET)

However, it intentionally does not perform the additional visibility logic from ConversationDetails, such as:

  • 1:1 metadata checks (missing name, missing otherUserId)
  • userDeleted or defederated logic
  • CONNECTION_PENDING visibility rules
  • isActive calculation from Member/User tables
  • interactionEnabled filtering
  • favorites and folder membership filtering

These rules remain enforced by the actual SELECT used for loading pages.

As a result, the lightweight COUNT may return a superset of the conversations actually displayed. This is acceptable because the Paging source uses the full SELECT query for the real data; the COUNT only provides an upper bound.
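
The safety argument can be illustrated with a toy model: the lightweight COUNT applies only the core filters, while the full SELECT additionally applies the visibility rules, so the COUNT is always an upper bound on the displayed rows. All names and data below are illustrative, not the actual schema:

```python
# Toy model of the paging contract. The lightweight COUNT may overcount,
# because only the real SELECT applies the extra visibility rules.
conversations = [
    {"type": "SELF", "archived": False, "visible": True},
    {"type": "GROUP", "archived": False, "visible": True},
    {"type": "ONE_ON_ONE", "archived": False, "visible": False},  # e.g. missing 1:1 metadata
    {"type": "GROUP", "archived": True, "visible": True},
]

def lightweight_count(rows, from_archive=False):
    """Core filters only (type, archived): mirrors the cheap COUNT query."""
    return sum(1 for r in rows
               if r["type"] != "SELF" and r["archived"] == from_archive)

def full_select(rows, from_archive=False):
    """Core filters plus the visibility rules enforced by the real paging SELECT."""
    return [r for r in rows
            if r["type"] != "SELF" and r["archived"] == from_archive and r["visible"]]

# The COUNT is a safe upper bound for the Paging source.
assert lightweight_count(conversations) >= len(full_select(conversations))
```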

Consequences

Benefits

  • Significant reduction of TTI spikes on conversation list screen
  • COUNT no longer triggers expensive joins or temporary B-trees
  • More stable performance on large accounts and older hardware
  • No functional changes to conversation visibility or ordering
  • Paging logic remains correct because only the real SELECT determines item membership

Trade-offs

  • COUNT may return a slightly higher number than the actual number of displayed conversations
  • Search queries still use the full COUNT because search relies on fields only available in the ConversationDetails view

Technical Notes

Below is the lightweight COUNT query introduced:

SELECT COUNT(*)
FROM Conversation
WHERE
    type IS NOT 'SELF'
    AND CASE
        WHEN :conversationFilter = 'ALL' THEN 1 = 1
        WHEN :conversationFilter = 'GROUPS' THEN (type = 'GROUP' AND is_channel = 0)
        WHEN :conversationFilter = 'ONE_ON_ONE' THEN type = 'ONE_ON_ONE'
        WHEN :conversationFilter = 'CHANNELS' THEN (type = 'GROUP' AND is_channel = 1)
        ELSE 1 = 0
    END
    AND archived = :fromArchive
    AND deleted_locally = 0
    AND (
        protocol IN ('PROTEUS','MIXED')
        OR (
            protocol = 'MLS'
            AND (
                :strict_mls = 0
                OR mls_group_state IN ('ESTABLISHED','PENDING_AFTER_RESET')
            )
        )
    );

6. Explicit API Mode Migration for Logic Module

Date: 2025-12-23

Status

Accepted

Context

The Kalium logic module serves as the main SDK entry point that client applications interact with. Without explicit visibility modifiers, it was difficult to distinguish between public API surface (intended for consumers) and internal implementation details. This led to several issues:

  1. Accidental API Exposure: Internal implementation classes and functions were inadvertently exposed to consumers, creating unintended API contracts
  2. API Maintenance Burden: Changes to internal implementations could break consumers who were using APIs that were never intended to be public
  3. Unclear API Boundaries: Developers couldn't easily distinguish between stable public APIs and internal utilities
  4. cryptoTransactionProvider Leakage: Critical internal components like CryptoTransactionProvider were being directly accessed by sample applications, bypassing proper API abstractions

Kotlin's explicitApi() mode enforces that all public declarations have explicit visibility modifiers and return types, forcing intentional decisions about API surface.
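
Enabling the mode is a small build-script change. A sketch of the relevant block in the :logic module's build.gradle.kts (surrounding plugin configuration omitted; exact placement in Kalium's build files may differ):

```kotlin
// logic/build.gradle.kts (sketch; surrounding configuration omitted)
kotlin {
    // Fail compilation when a public declaration lacks an explicit
    // visibility modifier or an explicit return type.
    explicitApi()

    // Alternatively, report violations as warnings during migration:
    // explicitApiWarning()
}
```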

Decision

We enabled explicitApi() mode for the :logic module and adopted an internal-first migration strategy:

  1. Mark Everything as Internal: Used automated scripts to add internal visibility modifiers to all declarations lacking explicit visibility (~3,449 declarations)
  2. Fix Interface Members: Removed incorrectly added internal modifiers from interface members (1,278 instances), as they inherit visibility from the interface
  3. Selective Public Exposure: Only made APIs public when consumer modules failed to compile, ensuring minimal API surface
  4. Create Public Wrappers: For internal components that needed controlled access (like cryptoTransactionProvider), created public wrapper methods in appropriate scopes (e.g., DebugScope.refillKeyPackages(), DebugScope.generateEvents())

Implementation Details

  • Created Python scripts for automated migration to handle:

    • Adding internal modifiers while skipping override functions
    • Removing invalid internal from interface members
    • Preserving proper indentation and code structure
  • Used the @InternalKaliumApi opt-in annotation for:

    • Test code accessing internal APIs within the same module
    • Debug utilities that need controlled internal access
    • Explicitly marking intentionally exposed internal APIs
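
The first migration pass can be illustrated with a simplified script. This is a hypothetical sketch, not the actual migration tooling; the real scripts also handled annotations on preceding lines, interface members, and indentation edge cases:

```python
import re

# Sketch of the "mark everything internal" pass described above.
# Matches declarations whose first token is a declaration keyword; lines
# already starting with a modifier (public, private, override, ...) do
# not match and are left untouched.
DECL = re.compile(r"^(\s*)(fun|class|object|interface|val|var)\b")

def mark_internal(source: str) -> str:
    """Prefix declarations lacking an explicit visibility modifier with `internal`."""
    out = []
    for line in source.splitlines():
        m = DECL.match(line)
        if m:
            # Insert the modifier after the indentation, before the keyword.
            line = m.group(1) + "internal " + line[m.start(2):]
        out.append(line)
    return "\n".join(out)
```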

Consequences

  • Minimal Public API: Only APIs that are actually used by consumers are public
  • Clear API Boundaries: Developers can immediately distinguish public SDK APIs from internal implementation details
  • Prevented Leakage: Critical internal components like CryptoTransactionProvider are now properly encapsulated behind public wrappers
  • Better Encapsulation: Internal implementations can be refactored without breaking consumers
  • Improved Documentation: The public API surface is now clearly defined and easier to document
  • Compiler-Enforced Contracts: Future additions must explicitly declare visibility, preventing accidental API exposure

Costs:

  • Verbose Code: All declarations now require explicit visibility modifiers

Migration Notes

For consumers experiencing compilation errors:

  1. Check if the API should be public (common use case) - submit an issue/PR to expose it
  2. For sample/test code, use @OptIn(InternalKaliumApi::class) if the internal API is intentionally marked with @InternalKaliumApi