Architecture Overview: Three-Layer Pattern

The codebase uses a three-layer architecture (similar to Clean Architecture / Layered Architecture) to separate concerns:

┌─────────────────────────────────────────────────────┐
│               LAYER 3: NETWORKING (axum)            │
│  └── /services/projects/networking/axum/src/api/    │
│                     projects/create.rs              │
│                                                     │
│  Responsibility:                                    │
│  - HTTP endpoint handling (axum extractors)         │
│  - Authentication/Authorization (JWT tokens)        │
│  - Request/Response serialization (JSON via Axum)   │
│  - Calls core layer                                 │
└─────────────────────────────────────────────────────┘
                         │
                         ▼
┌─────────────────────────────────────────────────────┐
│               LAYER 2: CORE (Business Logic)        │
│  └── /services/projects/core/src/api/projects/      │
│                     create.rs                       │
│                                                     │
│  Responsibility:                                    │
│  - Business logic validation                        │
│  - Orchestration of multiple operations             │
│  - Converts domain models to/from DAL               │
│  - No HTTP/web framework knowledge                  │
└────────────────────────┬────────────────────────────┘
                         │
                         ▼
┌─────────────────────────────────────────────────────┐
│               LAYER 1: DAL (Data Access Layer)      │
│  └── /layers/dal/src/models/projects/               │
│        ├── tx_definitions.rs                        │
│        └── postgres_txs.rs                          │
│                                                     │
│  Responsibility:                                    │
│  - Raw SQL queries                                  │
│  - Database transaction management                  │
│  - Zero business logic                              │
└───────────┬─────────────────────────────────────────┘
            │
            ▼
┌───────────────────────┐
│     POSTGRESQL        │
│      DATABASE         │
└───────────────────────┘
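
The call direction the diagram shows can be sketched in miniature: each layer only calls the one directly below it. All names and types here are toy stand-ins, not the real codebase's signatures.

```rust
// Layer 1 (DAL): raw storage access, zero business logic.
fn dal_insert_project(name: &str) -> Result<u32, String> {
    if name.len() > 64 {
        return Err("db error: value too long".to_string());
    }
    Ok(1) // pretend row id from INSERT ... RETURNING
}

// Layer 2 (core): business rules, then delegate to the DAL.
fn core_create_project(name: &str) -> Result<u32, String> {
    if name.trim().is_empty() {
        return Err("Project name cannot be empty".to_string());
    }
    dal_insert_project(name)
}

// Layer 3 (networking): translate transport input/output, call core.
fn http_create_project(body: &str) -> (u16, String) {
    match core_create_project(body) {
        Ok(id) => (201, format!("{{\"id\":{id}}}")),
        Err(e) => (400, format!("{{\"error\":\"{e}\"}}")),
    }
}

fn main() {
    assert_eq!(http_create_project("demo").0, 201);
    assert_eq!(http_create_project("   ").0, 400);
}
```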

How Each File Fits In

1. DAL Layer: tx_definitions.rs + postgres_txs.rs

tx_definitions.rs — Defines the traits that abstract database operations:

define_dal_transactions!(
    GetProjectsByDepartmentId => get_projects_by_department_id(department_id: i32) -> Vec<Project>,
    CreateProject => create_project(project: NewProject) -> Project,
    DeleteProject => delete_project(project_id: i32, dept_id: i32) -> bool,
    CheckUserProjectAccess => check_user_project_access(user_id: i32, project_id: i32) -> bool,
    GetProjectById => get_project_by_id(project_id: i32) -> Option<Project>
);

This expands to traits like:

pub trait CreateProject {
    fn create_project(project: NewProject) -> impl Future<Output = sqlx::Result<Project>> + Send;
}

See also: define_dal_transactions!

postgres_txs.rs — Implements those traits with actual SQL:

#[db_transaction(SqlxPostGresDescriptor, CreateProject)]
async fn create_project(project: NewProject) -> Project {
    let pool = T::yield_pool();
    let query = r#"
        INSERT INTO projects (department_id, name, description, created_at, updated_at)
        VALUES ($1, $2, $3, NOW(), NOW())
        RETURNING id, department_id, name, description, created_at, updated_at
    "#;
    sqlx::query_as::<_, Project>(query)
        .bind(project.department_id)
        .bind(project.name)
        .bind(project.description)
        .fetch_one(pool)
        .await
}

The #[db_transaction(StructName, TraitName)] macro:

  1. Generates an impl TraitName for StructName<T> where T: YieldPostGresPool
  2. Wraps the async function body in that implementation
  3. Makes the function callable as StructName::<PoolType>::create_project(...)
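
A rough synchronous sketch of what that generated implementation could look like. This is an assumption about the expansion, not the macro's actual output; the real traits are async and return `sqlx::Result`.

```rust
use std::marker::PhantomData;

// Toy stand-in for the real pool-provider trait.
trait YieldPool {
    fn yield_pool() -> &'static str;
}

struct SqlxPostGresDescriptor<T> {
    _marker: PhantomData<T>,
}

trait CreateProject {
    fn create_project(name: &str) -> String;
}

// Step 1 above: the trait implemented for the descriptor struct,
// generic over whichever pool provider `T` the caller selects.
impl<T: YieldPool> CreateProject for SqlxPostGresDescriptor<T> {
    fn create_project(name: &str) -> String {
        // Step 2: the annotated function body lands here, and
        // `T::yield_pool()` resolves through the chosen pool type.
        format!("INSERT into pool '{}' for project '{}'", T::yield_pool(), name)
    }
}

struct LivePool;
impl YieldPool for LivePool {
    fn yield_pool() -> &'static str { "live" }
}

fn main() {
    // Step 3: callable as StructName::<PoolType>::method(...).
    let out = SqlxPostGresDescriptor::<LivePool>::create_project("demo");
    assert!(out.contains("live"));
}
```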

2. Core Layer: core/src/api/projects/create.rs

This layer orchestrates the business logic:

pub async fn create_project<X, S>(storage_handle: &S, new_project: NewProject) -> Result<Project, NanoServiceError>
where
    X: CreateProject + ProjectBranchesCreateBranch,
    S: GitDataTransfer + Debug,
{
    // 1. VALIDATION (business rule)
    if new_project.department_id <= 0 {
        return Err(NanoServiceError::bad_request("Invalid department ID".to_string()));
    }
    if new_project.name.trim().is_empty() {
        return Err(NanoServiceError::bad_request("Project name cannot be empty".to_string()));
    }
    if new_project.description.trim().is_empty() {
        return Err(NanoServiceError::bad_request("Project description cannot be empty".to_string()));
    }
 
    // 2. Create project in database
    let created_project = X::create_project(new_project).await?;
 
    // 3. Create git directory (side effect)
    create_git_repo(storage_handle, created_project.id).await?;
 
    // 4. Register default branch
    let new_branch = NewProjectBranch { project_id: created_project.id, branch: "main".into() };
    X::create_branch(new_branch).await.map_err(|e| NanoServiceError::unknown(e.to_string()))?;
 
    Ok(created_project)
}

Key characteristics:

  • No HTTP/WebSocket knowledge — pure async functions
  • Generic over database handle (X: CreateProject) — allows mocking for tests
  • Validates business rules before touching the database
  • Orchestrates multiple operations (create project + git repo + branch)
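
A minimal runnable sketch of those characteristics, with synchronous stand-ins for the real async traits and simplified names (e.g. `CreateBranch` here stands in for `ProjectBranchesCreateBranch`):

```rust
struct NewProject { name: String }
struct Project { id: i32 }

trait CreateProject {
    fn create_project(p: NewProject) -> Result<Project, String>;
}
trait CreateBranch {
    fn create_branch(project_id: i32, branch: &str) -> Result<(), String>;
}

// One core function, bounded on both DAL traits, failing fast.
fn create_project_core<X>(p: NewProject) -> Result<Project, String>
where
    X: CreateProject + CreateBranch,
{
    if p.name.trim().is_empty() {
        return Err("Project name cannot be empty".to_string()); // business rule
    }
    let created = X::create_project(p)?;   // DB insert
    X::create_branch(created.id, "main")?; // register default branch
    Ok(created)
}

// A stub handle standing in for the Postgres-backed descriptor;
// swapping it in requires no change to the core function.
struct Stub;
impl CreateProject for Stub {
    fn create_project(_p: NewProject) -> Result<Project, String> {
        Ok(Project { id: 9 })
    }
}
impl CreateBranch for Stub {
    fn create_branch(_project_id: i32, _branch: &str) -> Result<(), String> {
        Ok(())
    }
}

fn main() {
    let ok = create_project_core::<Stub>(NewProject { name: "demo".into() });
    assert_eq!(ok.unwrap().id, 9);
    let bad = create_project_core::<Stub>(NewProject { name: "   ".into() });
    assert!(bad.is_err());
}
```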

3. Networking Layer: networking/axum/src/api/projects/create.rs

This layer adapts the core to HTTP:

pub async fn create_project<T, X, Y>(
    token: HeaderToken<X, NoRoleCheck, T>,  // Auth extraction
    Json(payload): Json<NewProjectRequest>, // JSON deserialization
) -> Result<impl IntoResponse, NanoServiceError>
where
    T: CreateProject + GetProjectsByDepartmentId + PingAuthSession + ProjectBranchesCreateBranch,
    X: GetConfigVariable,
    Y: YieldPostGresPool + Send + Sync + Clone + Debug,
{
    // 1. Extract department from JWT
    let department_id = token.get_department_id()?;
 
    // 2. Convert request DTO to domain model
    let new_project = NewProject {
        department_id,
        name: payload.name,
        description: payload.description
    };
 
    // 3. Create git storage handle
    let storage_handle = PostgresGitBlobHandle::<Y>::new();
 
    // 4. Call core business logic
    let _ = create_project_core::<T, _>(&storage_handle, new_project).await?;
 
    // 5. Return updated list
    let projects = get_projects_by_department_id_core::<T>(department_id).await?;
    Ok((StatusCode::CREATED, Json(projects)))
}

Key characteristics:

  • Axum extractors handle HTTP parsing
  • Authentication via JWT token validation
  • Converts between request types (NewProjectRequest → NewProject)
  • Handles HTTP concerns (status codes, JSON serialization)
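
The conversion in step 2 can be isolated in a small runnable sketch. The `into_domain` helper is hypothetical (the real handler builds `NewProject` inline), and `department_id` comes from the JWT rather than the request body:

```rust
// DTO carrying only client-supplied fields.
struct NewProjectRequest { name: String, description: String }

// Domain model passed down to the core layer.
struct NewProject { department_id: i32, name: String, description: String }

impl NewProjectRequest {
    fn into_domain(self, department_id: i32) -> NewProject {
        NewProject {
            department_id, // trusted claim from the token, not client input
            name: self.name,
            description: self.description,
        }
    }
}

fn main() {
    let req = NewProjectRequest { name: "demo".into(), description: "docs".into() };
    let domain = req.into_domain(42); // 42 standing in for the token's department_id
    assert_eq!(domain.department_id, 42);
    assert_eq!(domain.name, "demo");
    assert_eq!(domain.description, "docs");
}
```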

Complete Workflow

Client Request
      │
      ▼
 ┌─────────────────────────────────────────────────────────────────┐
 │ 1. HTTP REQUEST arrives at axum endpoint                        │
 │    POST /api/v1/projects/create                                 │
 │    Headers: Authorization: Bearer <jwt>                         │
 │    Body: { "name": "...", "description": "..." }                │
 └─────────────────────────────────────────────────────────────────┘
      │
      ▼
 ┌─────────────────────────────────────────────────────────────────┐
 │ 2. AXUM LAYER (networking/axum)                                 │
 │    - Extracts and validates JWT token                           │
 │    - Deserializes JSON payload                                  │
 │    - Converts NewProjectRequest → NewProject                    │
 │    - Creates PostgresGitBlobHandle                              │
 │    - Calls create_project_core()                                │
 └─────────────────────────────────────────────────────────────────┘
      │
      ▼
 ┌─────────────────────────────────────────────────────────────────┐
 │ 3. CORE LAYER (core/api)                                        │
 │    - Validates department_id > 0                                │
 │    - Validates name is not empty                                │
 │    - Validates description is not empty                         │
 │    - Calls DAL: T::create_project()                             │
 │    - Calls git repo creation (storage_handle)                   │
 │    - Calls DAL: T::create_branch()                              │
 │    - Returns Project model                                      │
 └─────────────────────────────────────────────────────────────────┘
      │
      ▼
 ┌─────────────────────────────────────────────────────────────────┐
 │ 4. DAL LAYER (dal/models)                                       │
 │    - tx_definitions.rs: defines CreateProject trait             │
 │    - postgres_txs.rs:                                           │
 │        #[db_transaction(Struct, Trait)]                         │
 │        async fn create_project() -> SQL INSERT + RETURNING      │
 │    - SqlxPostGresDescriptor implements the trait                │
 │    - SQL executed against PostgreSQL                            │
 └─────────────────────────────────────────────────────────────────┘
      │
      ▼
 ┌─────────────────────────────────────────────────────────────────┐
 │ 5. DATABASE (PostgreSQL)                                        │
 │    INSERT INTO projects (...) VALUES (...)                      │
 │    RETURNING id, department_id, name, description, ...          │
 └─────────────────────────────────────────────────────────────────┘
      │
      ▼
 The created Project propagates back up the stack to the client

Data Flow Diagram

┌─────────────┐     HTTP JSON     ┌─────────────┐    NewProject    ┌─────────────┐
│  Client     │ ────────────────► │  networking │ ───────────────► │    core     │
│             │                   │    (axum)   │                  │ (create)    │
└─────────────┘                   └─────────────┘                  └─────────────┘
                                                                  │
                                      ┌───────────────────────────┼─────────────────────┐
                                      │                           │                     │
                                      ▼                           ▼                     ▼
                           ┌──────────────────┐   ┌──────────────────┐   ┌──────────────────┐
                           │DAL: CreateProject│   │GitDataTransfer   │   │DAL: CreateBranch │
                           │(sqlx INSERT)     │   │(create git dir)  │   │(sqlx INSERT)     │
                           └──────────────────┘   └──────────────────┘   └──────────────────┘
                                      │                           │                     │
                                      ▼                           ▼                     ▼
                           ┌──────────────────┐   ┌──────────────────┐   ┌──────────────────┐
                           │   PostgreSQL     │   │   Database       │   │   PostgreSQL     │
                           │   projects       │   │   git_blobs      │   │ project_branches │
                           └──────────────────┘   └──────────────────┘   └──────────────────┘

Pros and Cons of This Approach

✅ Pros

  • Separation of Concerns: Each layer has a single responsibility. DAL knows SQL, Core knows business logic, Networking knows HTTP.
  • Testability: Core layer can be tested with mock DB handles (MockDeadPostGresPool) without any HTTP server. No network needed for unit tests.
  • Database Abstraction: The trait-based DAL allows swapping PostgreSQL for another database (though not currently used).
  • Reusability: Core layer functions can be called from HTTP, WebSocket, gRPC, CLI, or tests — not coupled to HTTP.
  • Consistency: All endpoints follow the same pattern — predictable codebase structure.
  • Swappable Networking: Axum could be swapped for Actix-web or Hyper with minimal core changes.
  • Clear Boundaries: Easy to identify where bugs live: HTTP issue → networking, business logic → core, SQL → DAL.

❌ Cons

  • Boilerplate Overhead: Three files per feature with traits, macros, and adapters create ceremony. A simple CRUD operation requires significant scaffolding.
  • Generic Proliferation: Every function has 3+ generic type parameters (<T, X, Y>), making signatures hard to read and IDE autocomplete overwhelming.
  • Tight Coupling via Traits: The where X: CreateProject + GetProjectsByDepartmentId + ... clauses require implementing many traits, creating coupling between networking and DAL layers.
  • No Transaction Across Layers: The create_project core function calls multiple DAL operations that aren't wrapped in a DB transaction. If create_git_repo fails, the project row was already committed.
  • Hidden Complexity in Macros: #[db_transaction] and define_dal_transactions! are magical — hard to debug, and IDEs can't "go to definition" easily.
  • Request/Response Type Proliferation: NewProjectRequest (HTTP layer) → NewProject (Core layer) → NewProject (DAL) is mostly the same struct with different names.
  • Hard to Follow the Flow: New developers must trace through 3 files + 2 macros to understand how a simple INSERT works.
  • Over-engineering for Simple Ops: For a simple SELECT * FROM projects, you still need the full three-layer setup.
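
The missing-atomicity issue can be made concrete with a toy sketch: a staged "transaction" that only commits when every step succeeds. (With sqlx, the real fix would be along the lines of `pool.begin()` / `tx.commit()`, threading the transaction handle through the DAL calls; the types below are stand-ins, not the real DAL.)

```rust
#[derive(Default)]
struct FakeDb {
    rows: Vec<String>,
}

impl FakeDb {
    // Run `work` against a staged copy; commit only on full success.
    fn transaction<F>(&mut self, work: F) -> Result<(), String>
    where
        F: FnOnce(&mut Vec<String>) -> Result<(), String>,
    {
        let mut staged = self.rows.clone();
        work(&mut staged)?; // any Err leaves self.rows untouched
        self.rows = staged; // commit
        Ok(())
    }
}

fn main() {
    let mut db = FakeDb::default();
    // Simulate create_project succeeding, then the git side effect failing.
    let result = db.transaction(|rows| {
        rows.push("projects: new row".to_string());
        Err("create_git_repo failed".to_string())
    });
    assert!(result.is_err());
    assert!(db.rows.is_empty()); // no orphan project row was committed
}
```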

Key Files Summary

  • layers/dal/src/models/projects/tx_definitions.rs: defines trait signatures for DB operations (key pattern: define_dal_transactions! macro)
  • layers/dal/src/models/projects/postgres_txs.rs: implements the traits with SQL (key pattern: #[db_transaction(Struct, Trait)] proc macro)
  • services/projects/core/src/api/projects/create.rs: business logic orchestration (key pattern: validation → DAL calls → return)
  • services/projects/networking/axum/src/api/projects/create.rs: HTTP adapter layer (key pattern: Axum extractors → call core → HTTP response)

Testability Example

The beauty of this pattern is shown in the core tests:

// Core layer test with MOCK database — no real DB needed
#[db_transaction(MockDbHandle, CreateProject)]
async fn create_project(new_project: NewProject) -> Project {
    Ok(Project { id: 1, ... }) // Mocked response
}
 
let result = create_project::<MockDbHandle<MockDeadPostGresPool>, _>(
    &mock_git_handle,
    new_project,
)
.await;

This lets you test business logic validation and orchestration without spinning up a PostgreSQL instance.