unified server, composition rules

This commit is contained in:
naudachu 2026-03-19 21:29:16 +05:00
parent de4efe97bf
commit 7cbef32c21
9 changed files with 1550 additions and 41 deletions


@ -21,8 +21,9 @@ Before performing ANY operation, read ALL reference files to have the complete r
6. Read `~/.claude/skills/threedots/references/rules-ports.md`
7. Read `~/.claude/skills/threedots/references/rules-naming.md`
8. Read `~/.claude/skills/threedots/references/rules-codestyle.md`
9. Read `~/.claude/skills/threedots/references/rules-watermill.md`
Read all 9 files in parallel before proceeding.
## Argument Parsing
@ -35,6 +36,10 @@ Parse the user's arguments:
- **`scaffold query <Name>`**: Generate query handler file
- **`scaffold entity <Name>`**: Generate domain entity file
- **`scaffold repo <Name>`**: Generate repository interface + memory implementation
- **`scaffold unified_server`**: Generate unified server with named components, OnShutdown, With* options
- **`scaffold watermill_router`**: Generate WithWatermillRouter option + publisher client
- **`scaffold event_handler <Name>`**: Generate event handler port (inbound Watermill adapter)
- **`scaffold event_publisher <Name>`**: Generate event publisher adapter (outbound Watermill adapter)
If arguments don't match any pattern, show usage help.
@ -56,12 +61,13 @@ For each rule category, scan the relevant files:
| Category | Scan targets |
|----------|-------------|
| Architecture (ARCH-01..08) | Directory structure, all `.go` file imports, `service/`, `main.go` |
| Watermill (WM-01..10) | `main.go`, `server/watermill.go`, `client/watermill.go`, `ports/event.go`, `adapters/*event*.go`, `app/command/services.go` |
| Domain (DOM-01..09) | All files in `domain/` |
| CQRS (CQRS-01..10) | Files in `app/command/`, `app/query/`, `app/app.go` |
| Repository (REPO-01..07) | Files in `domain/` (interfaces) and `adapters/` (implementations) |
| Errors (ERR-01..05) | All files in `domain/`, error-related files |
| Ports (PORT-01..06) | Files in `ports/` |
| Naming (NAME-*) | All `.go` files — function names, type names |
| Code Style (STYLE-01..08) | All `.go` files, `_test.go` files |
@ -89,7 +95,7 @@ CRITICAL: N violations
WARNING: N violations
INFO: N violations

Conformance: X/45 rules passing

Top priorities:
1. [RULE-ID]: brief description of most impactful fix
@ -120,6 +126,10 @@ Read the appropriate template from `~/.claude/skills/threedots/templates/`:
| `query` | `templates/query.md` |
| `entity` | `templates/entity.md` |
| `repo` | `templates/repo.md` |
| `unified_server` | `templates/unified_server.md` |
| `watermill_router` | `templates/watermill_router.md` |
| `event_handler` | `templates/event_handler.md` |
| `event_publisher` | `templates/event_publisher.md` |
### Step 3 — Substitute and Create
@ -141,7 +151,12 @@ Create the files using the Write tool. After creation, list what was created and
|----|------|----------|
| ARCH-01 | Standard directory layout: domain/, app/{command,query}, ports/, adapters/, service/ | CRITICAL |
| ARCH-02 | Dependency direction: domain ← app ← ports/adapters; domain imports NOTHING from app/ports/adapters | CRITICAL |
| ARCH-03 | Composition root isolation — only service/ knows concrete adapters and infra | CRITICAL |
| ARCH-04 | Dual constructor pattern — shared private wiring, prod + test constructors | WARNING |
| ARCH-05 | Cleanup function returned from NewApplication for resource lifecycle | WARNING |
| ARCH-06 | Server startup via callback — main.go provides handler, never configures internals | WARNING |
| ARCH-07 | Composition root must not own server lifecycle — no servers, listeners, signals in service/ | CRITICAL |
| ARCH-08 | Unified server with named components and OnShutdown — explicit shutdown ordering | WARNING |
| DOM-01 | All entity fields private (unexported) | CRITICAL |
| DOM-02 | Factory constructors: New{Type}(...) (*Type, error) | WARNING |
| DOM-03 | MustNew{Type} panics on error, for tests/init | INFO |
@ -178,6 +193,7 @@ Create the files using the Write tool. After creation, list what was created and
| PORT-03 | Auth extracted from context, not parsed in handler | WARNING |
| PORT-04 | No business logic in port handlers — only marshal/unmarshal + delegate | CRITICAL |
| PORT-05 | Response model mapping functions separate from handlers | INFO |
| PORT-06 | No Unimplemented embedding in gRPC servers — compile-time compliance | CRITICAL |
| STYLE-01 | Import groups: stdlib, blank line, external packages | INFO |
| STYLE-02 | Pointer receivers for mutation, value for reads | INFO |
| STYLE-03 | t.Parallel() as first line in every test | WARNING |
@ -186,3 +202,13 @@ Create the files using the Write tool. After creation, list what was created and
| STYLE-06 | Table-driven tests with named cases | INFO |
| STYLE-07 | Interfaces defined where consumed, not where implemented | WARNING |
| STYLE-08 | context.Context as first parameter for I/O methods | WARNING |
| WM-01 | Router factory via callback — same pattern as gRPC/HTTP | CRITICAL |
| WM-02 | Publisher factory returns (Publisher, Close, Error) triple | CRITICAL |
| WM-03 | Event handlers live in ports/ — same as HTTP/gRPC handlers | CRITICAL |
| WM-04 | Event publisher adapter implements domain interface | WARNING |
| WM-05 | Topic naming uses domain language with dot notation | WARNING |
| WM-06 | Event structs live in ports/ or adapters/, not domain/ | INFO |
| WM-07 | Watermill middleware in server factory only | WARNING |
| WM-08 | Publisher cleanup in composition root cleanup function | WARNING |
| WM-09 | Named components replace SERVER_TO_RUN switch | INFO |
| WM-10 | No sync side effects replaced by fire-and-forget without saga | CRITICAL |


@ -1,4 +1,4 @@
# Architecture Rules (ARCH-01..08)

## ARCH-01: Standard Directory Layout (CRITICAL)
@ -62,49 +62,419 @@ The app layer MUST NOT import from:
---

## ARCH-03: Composition Root Isolation (CRITICAL)

All dependency wiring MUST happen exclusively in `service/`. The composition root is the **only** place that knows about concrete adapter types, infrastructure clients, and how dependencies connect.

**`main.go`** MUST only:

1. Initialize cross-cutting concerns (logging)
2. Call `service.NewApplication()`
3. Wire ports (pass `app.Application` to port constructors)
4. Start the server

`main.go` MUST NOT import `adapters/`, create infrastructure clients, or instantiate command/query handlers directly.

**Check procedure:**

1. Scan `main.go` imports — flag any reference to `adapters/`, database drivers, or external service clients
2. Scan all files outside `service/` — flag any call to adapter constructors (e.g., `adapters.New*`)
3. Verify `service/` returns `app.Application`

**Correct:**

```go
// main.go — only knows about service and ports
func main() {
	logs.Init()
	ctx := context.Background()

	app, cleanup := service.NewApplication(ctx)
	defer cleanup()

	server.RunHTTPServer(func(router chi.Router) http.Handler {
		return ports.HandlerFromMux(ports.NewHttpServer(app), router)
	})
}
```

**Wrong:**

```go
// main.go — VIOLATION: wiring infrastructure directly
func main() {
	client, _ := firestore.NewClient(ctx, os.Getenv("GCP_PROJECT")) // VIOLATION
	repo := adapters.NewFirestoreRepository(client)                 // VIOLATION
	handler := command.NewScheduleTrainingHandler(repo, logger, mc) // VIOLATION
	// ...
}
```

---

## ARCH-04: Dual Constructor Pattern for Testability (WARNING)

The composition root MUST provide two constructors sharing a single private wiring function:

1. **`NewApplication(ctx) (app.Application, func())`** — production constructor, creates real infrastructure
2. **`NewComponentTestApplication(ctx) app.Application`** — test constructor, injects mocks/stubs

Both MUST delegate to a **private** `newApplication(...)` that accepts dependencies as interfaces, so the real vs test paths only differ in what they pass in.

This ensures:
- Test mocks never leak into production wiring
- All wiring logic is shared — no drift between prod and test setups
- The private function signature documents the full set of external dependencies

**Check procedure:**

1. Look for exported `NewApplication` and `NewComponentTestApplication` in `service/`
2. Verify both call the same unexported function
3. The unexported function MUST accept dependencies as interfaces, not concrete types

**Correct:**

```go
// service/service.go
func NewApplication(ctx context.Context) (app.Application, func()) {
	trainerClient, closeTrainer, err := client.NewTrainerClient()
	if err != nil { panic(err) }
	trainerService := adapters.NewTrainerGrpc(trainerClient)
	return newApplication(ctx, trainerService),
		func() { _ = closeTrainer() }
}

func NewComponentTestApplication(ctx context.Context) app.Application {
	return newApplication(ctx, TrainerServiceMock{})
}

func newApplication(ctx context.Context, trainerService command.TrainerService) app.Application {
	// shared wiring logic — accepts interfaces, not concrete types
	repo := adapters.NewFirestoreRepository(client)
	return app.Application{ /* ... */ }
}
```

**Wrong:**

```go
// VIOLATION: separate wiring paths, no shared private function
func NewApplication(ctx context.Context) app.Application {
	repo := adapters.NewFirestoreRepository(client)
	return app.Application{
		Commands: app.Commands{
			ScheduleTraining: command.NewScheduleTrainingHandler(repo, logger, mc),
		},
	}
}

func NewTestApplication() app.Application {
	repo := NewMockRepo() // VIOLATION: duplicated wiring, can drift
	return app.Application{
		Commands: app.Commands{
			ScheduleTraining: command.NewScheduleTrainingHandler(repo, logger, mc),
		},
	}
}
```
---
## ARCH-05: Cleanup Function for Resource Lifecycle (WARNING)
When the composition root creates resources that require cleanup (connections, clients, subscriptions), `NewApplication` MUST return a cleanup function alongside the application. The caller owns the lifecycle via `defer`.
This ensures:
- Resources are released even on panic
- `main.go` doesn't need to know *what* to clean up — just *that* it must
- Adding new infrastructure only changes `service/`, not `main.go`
**Check procedure:**
1. If `NewApplication` creates closeable resources (clients, connections), it MUST return `func()`
2. `main.go` MUST call `defer cleanup()` immediately after receiving it
3. The cleanup function MUST NOT be ignored (assigned to `_`)
**Correct:**
```go
// service/service.go
func NewApplication(ctx context.Context) (app.Application, func()) {
trainerClient, closeTrainer, err := client.NewTrainerClient()
if err != nil { panic(err) }
usersClient, closeUsers, err := client.NewUsersClient()
if err != nil { panic(err) }
return newApplication(ctx, adapters.NewTrainerGrpc(trainerClient), adapters.NewUsersGrpc(usersClient)),
func() {
_ = closeTrainer()
_ = closeUsers()
}
}
// main.go
app, cleanup := service.NewApplication(ctx)
defer cleanup()
```
**Wrong:**
```go
// VIOLATION: caller must know internals to clean up
func NewApplication(ctx context.Context) (app.Application, *firestore.Client, *grpc.ClientConn) {
// ...
}
// VIOLATION: cleanup responsibility leaks into main
app, fsClient, conn := service.NewApplication(ctx)
defer fsClient.Close() // main.go shouldn't know about Firestore
defer conn.Close() // main.go shouldn't know about gRPC
```
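When `NewApplication` creates several closeable resources, the returned cleanup can be composed from the individual close functions. A minimal sketch — `combineCleanups` is a hypothetical helper, not part of the documented API, and running cleanups in reverse registration order is an assumption mirroring `defer` semantics:

```go
package main

import "fmt"

// combineCleanups returns one cleanup that runs the given cleanups in
// reverse registration order (last-created resource closes first),
// matching what a stack of defers would do.
func combineCleanups(cleanups ...func()) func() {
	return func() {
		for i := len(cleanups) - 1; i >= 0; i-- {
			cleanups[i]()
		}
	}
}

func main() {
	cleanup := combineCleanups(
		func() { fmt.Println("close trainer client") },
		func() { fmt.Println("close users client") },
	)
	cleanup() // users client closes first, then trainer
}
```

`main.go` still sees only a single `func()`, so adding a new resource remains a `service/`-only change.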
---
## ARCH-06: Server Startup via Callback (WARNING)
Server startup MUST be delegated to a shared `server.Run*Server()` function. `main.go` provides **only the application handler** via a callback. It MUST NOT configure server internals: middleware, routing, listening address, or transport-level concerns.
This ensures:
- Middleware stack (auth, logging, recovery, CORS, security headers) is consistent across all services
- Adding or changing middleware is a single change, not per-service
- `main.go` remains a thin orchestrator: init → wire app → provide handler → run
**Check procedure:**
1. `main.go` MUST call a shared `Run*Server()` function as the final blocking call
2. The callback passed to `Run*Server()` MUST only construct the handler from port constructors — no middleware setup, no router configuration, no listener creation
3. `main.go` MUST NOT import server infrastructure packages (e.g., `net/http.ListenAndServe`, `net.Listen`, middleware libraries)
**Correct:**
```go
// main.go — provides handler, delegates everything else
func main() {
logs.Init()
ctx := context.Background()
app, cleanup := service.NewApplication(ctx)
defer cleanup()
server.RunHTTPServer(func(router chi.Router) http.Handler {
return ports.HandlerFromMux(ports.NewHttpServer(app), router)
})
}
```
**Wrong:**
```go
// VIOLATION: main.go configures server internals
func main() {
app, cleanup := service.NewApplication(ctx)
defer cleanup()
router := chi.NewRouter()
router.Use(middleware.Logger) // VIOLATION: middleware in main
router.Use(middleware.Recoverer) // VIOLATION: middleware in main
router.Mount("/api", ports.NewHttpServer(app))
http.ListenAndServe(":8080", router) // VIOLATION: listening in main
}
```
---
## ARCH-07: Composition Root Must Not Own Server Lifecycle (CRITICAL)
The `service/` package wires dependencies and returns `app.Application`. It MUST NOT create transport servers, bind to network ports, handle OS signals, or manage graceful shutdown. Server lifecycle is a **separate concern** that belongs in a shared server package or the entry point.
`service/` MUST NOT:
- Create transport servers (`grpc.NewServer()`, `http.Server{}`, `message.NewRouter()`)
- Bind to network ports (`net.Listen()`)
- Handle OS signals (`signal.NotifyContext()`, `signal.Notify()`)
- Manage graceful shutdown (`GracefulStop()`, `router.Close()`)
- Import port packages (`ports/grpc`, `ports/amqp`, `ports/http`)
`service/` MUST only:
- Create infrastructure clients and adapters
- Wire command/query handlers with dependencies
- Return `app.Application` (and optionally a cleanup function)
**Check procedure:**
1. Scan all files in `service/` for imports of `net`, `os/signal`, `syscall`, transport packages, or `ports/`
2. Flag any function in `service/` that accepts or creates a server, listener, or router
3. A file named `server.go` in `service/` is a strong signal of violation
**Correct:**
```go
// service/service.go — only wires the application
func NewApplication(ctx context.Context, cfg *config.Config) (app.Application, func()) {
repo := adapters.NewFirestoreRepository(client)
syncer := tokensync.NewSyncer(fetchers, syncRepo, progressTracker)
return newApplication(repo, syncer),
func() { _ = client.Close() }
}
// Server lifecycle lives elsewhere (shared server package or entry point)
```
**Wrong:**
```go
// service/server.go — VIOLATION: server lifecycle in composition root
func RunServer(application app.Application, cfg *config.Config) error {
ctx, stop := signal.NotifyContext(context.Background(), ...) // VIOLATION: signal handling
defer stop()
grpcServer := grpc.NewServer() // VIOLATION: transport server
pb.RegisterCommandsServer(grpcServer, ports.NewServer(app)) // VIOLATION: imports ports/
lis, _ := net.Listen("tcp", fmt.Sprintf(":%s", cfg.Port)) // VIOLATION: network binding
go grpcServer.Serve(lis) // VIOLATION: server lifecycle
<-ctx.Done()
grpcServer.GracefulStop() // VIOLATION: shutdown management
return nil
}
```
---
## ARCH-08: Unified Server with Named Components and OnShutdown (WARNING)
When a project has multiple transports (gRPC, HTTP, AMQP/Watermill), the shared server package SHOULD provide a **single `server.New(...).Run(ctx)`** with functional options per transport and an explicit `OnShutdown` that declares the shutdown sequence.
### Why explicit shutdown ordering matters
Different services have different dependency graphs between transports:
- A consumer that calls gRPC must stop consuming *before* gRPC clients close
- An HTTP API that publishes events must drain HTTP *before* the publisher closes
- Two independent ingress points (HTTP + gRPC) can shut down in parallel
Implicit ordering (LIFO based on registration) is fragile — reordering lines silently changes shutdown behavior. `OnShutdown` makes the sequence a readable, reviewable declaration.
### Core types
```go
// server/server.go
type Server struct {
components map[string]component
startOrder []string
shutdownSteps []ShutdownStep
}
type component struct {
name string
start func(ctx context.Context) error
stop func(ctx context.Context) error
}
type Option func(*Server)
type ShutdownStep struct {
componentNames []string
fn func(ctx context.Context) error
}
```
### API
```go
// Stop creates a step that stops named components.
// Multiple names = parallel shutdown within the step.
func Stop(names ...string) ShutdownStep
// StopFunc creates a step that runs an arbitrary cleanup function.
func StopFunc(fn func()) ShutdownStep
// StopFuncWithErr creates a step with error return.
func StopFuncWithErr(fn func(ctx context.Context) error) ShutdownStep
// OnShutdown declares the shutdown sequence.
// Steps execute top-to-bottom. Each step completes before the next starts.
// Components not mentioned stop last (with a warning log).
func OnShutdown(steps ...ShutdownStep) Option
```
### Shutdown execution
1. Steps execute sequentially in declaration order
2. Within a `Stop("a", "b")` call, components stop in parallel
3. Each step's `wg.Wait()` completes before the next step begins
4. Components not mentioned in any `Stop()` get a catch-all parallel stop after all explicit steps (with a warning log — every component should be in OnShutdown)
5. A global timeout (default 30s) bounds the entire sequence
### Key design principles
- Each `With*` option takes a `name string` as first argument — used in `Stop(name)` to reference it
- `OnShutdown` reads top-to-bottom as a shutdown script
- The factory owns `signal.NotifyContext` — callers never handle signals
- `defer cleanup()` from `NewApplication` naturally runs after `Run()` returns — it is the implicit last phase
- Duplicate component names panic at startup — caught immediately
**Check procedure:**
1. If a project uses 2+ transports, verify `server.New()` is used (not multiple `Run*Server` calls)
2. Verify `OnShutdown` is present and lists all components
3. Verify shutdown order makes sense: consumers before servers, servers before clients
4. No `signal.NotifyContext`, `net.Listen`, or `GracefulStop` calls outside `common/server/`
**Correct:**
```go
// Trainer: HTTP + gRPC + Watermill consumer
func main() {
logs.Init()
ctx := context.Background()
app, cleanup := service.NewApplication(ctx)
defer cleanup()
server.New(
server.WithWatermillRouter("events", func(r *message.Router, sub message.Subscriber) {
ports.RegisterEventHandlers(r, sub, app)
}),
server.WithHTTPHandler("api", func(router chi.Router) http.Handler {
return ports.HandlerFromMux(ports.NewHttpServer(app), router)
}),
server.WithGRPCServer("grpc", func(s *grpc.Server) {
trainer.RegisterTrainerServiceServer(s, ports.NewGrpcServer(app))
}),
server.OnShutdown(
server.Stop("events"), // 1. stop consuming
server.Stop("api", "grpc"), // 2. drain both servers in parallel
server.StopFunc(cleanup), // 3. close clients & publisher
),
).Run(ctx)
}
// Trainings: HTTP-only, publishes events (publisher in cleanup)
func main() {
logs.Init()
ctx := context.Background()
app, cleanup := service.NewApplication(ctx)
defer cleanup()
server.New(
server.WithHTTPHandler("api", func(router chi.Router) http.Handler {
return ports.HandlerFromMux(ports.NewHttpServer(app), router)
}),
server.OnShutdown(
server.Stop("api"), // 1. drain HTTP (in-flight may publish events)
server.StopFunc(cleanup), // 2. close publisher + gRPC clients
),
).Run(ctx)
}
```
**Wrong:**
```go
// VIOLATION: implicit LIFO ordering — fragile
server.New(
server.WithHTTPHandler("api", createHandler),
server.WithWatermillRouter("events", configureRouter),
// no OnShutdown — relies on registration order
).Run(ctx)
// VIOLATION: manual lifecycle per transport
func main() {
ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
grpcServer := grpc.NewServer()
go grpcServer.Serve(lis)
router, _ := message.NewRouter(...)
go router.Run(ctx)
<-ctx.Done()
grpcServer.GracefulStop()
router.Close()
}
```


@ -1,4 +1,4 @@
# Port Rules (PORT-01..06)

## PORT-01: Handler Struct Holds Application (WARNING)
@ -146,3 +146,34 @@ func (h HttpServer) GetTrainerAvailableHours(w http.ResponseWriter, r *http.Requ
render.Respond(w, r, dates)
}
```
---
## PORT-06: No Unimplemented Embedding in gRPC Servers (CRITICAL)
gRPC server structs MUST NOT embed `Unimplemented*Server` structs. Omitting the embed enforces **compile-time interface compliance** — if a new RPC is added to the proto definition, the code will fail to compile until the method is explicitly implemented.
Embedding `Unimplemented*Server` silently returns "unimplemented" at runtime for missing methods, hiding broken contracts until a request hits the missing endpoint in production.
**Correct:**
```go
type GrpcServer struct {
app app.Application
}
// Compile error if any RPC method from TrainerServiceServer is missing.
```
**Wrong:**
```go
type GrpcServer struct {
trainer.UnimplementedTrainerServiceServer // VIOLATION: hides missing methods from the compiler
app app.Application
}
```
**Check:** Scan all structs in `ports/grpc.go` for embedded `Unimplemented*Server` fields. Any match is a CRITICAL violation.
**Proto generation:** When generating gRPC code, use `require_unimplemented_servers=false` to keep the interface strict:
```
protoc --go-grpc_out=require_unimplemented_servers=false:. *.proto
```
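The same compile-time guarantee can be made explicit with a blank-identifier assertion. Here is a self-contained analog — `Greeter` stands in for a generated `*ServiceServer` interface, and all names are illustrative:

```go
package main

import "fmt"

// Greeter plays the role of the generated service interface.
type Greeter interface {
	Greet(name string) string
}

type GrpcServer struct{}

func (GrpcServer) Greet(name string) string { return "hello " + name }

// Compile-time assertion: the build fails on this line if GrpcServer ever
// stops satisfying the interface — the same guarantee that omitting the
// Unimplemented embed gives for real generated servers.
var _ Greeter = GrpcServer{}

func main() {
	fmt.Println(GrpcServer{}.Greet("trainer")) // hello trainer
}
```

With generated gRPC code the assertion would read `var _ trainer.TrainerServiceServer = GrpcServer{}` in `ports/grpc.go`.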


@ -0,0 +1,407 @@
# Watermill Rules (WM-01..10)
## WM-01: Watermill as a Named Component in Unified Server (CRITICAL)
Watermill router MUST be registered as a named component via `server.WithWatermillRouter(name, configure)` — same pattern as `WithHTTPHandler` and `WithGRPCServer`. The `With*` option owns AMQP connection, middleware, and router lifecycle. The caller provides **only handler registration** via callback.
This ensures:
- Middleware stack (retry, correlation, recovery) is consistent across all services
- Broker config is centralized — swapping AMQP for Kafka changes one file
- Shutdown ordering is explicit via `server.OnShutdown(server.Stop(name))`
**Check procedure:**
1. Scan `main.go` for direct Watermill router creation (`message.NewRouter`, `amqp.NewSubscriber`)
2. Flag any middleware setup outside `server/watermill.go`
3. Verify Watermill component appears in `OnShutdown` with correct ordering
**Correct:**
```go
// internal/common/server/watermill.go
func WithWatermillRouter(
name string,
configure func(*message.Router, message.Subscriber),
) Option {
return func(s *Server) {
wmLogger := watermill.NewStdLoggerWithOut(os.Stdout, true, false)
amqpURI := os.Getenv("AMQP_URI")
amqpConfig := amqp.NewDurableQueueConfig(amqpURI)
sub, err := amqp.NewSubscriber(amqpConfig, wmLogger)
if err != nil { panic(err) }
r, err := message.NewRouter(message.RouterConfig{}, wmLogger)
if err != nil { panic(err) }
r.AddMiddleware(
wmMiddleware.CorrelationID,
wmMiddleware.Recoverer,
wmMiddleware.Retry{MaxRetries: 3}.Middleware,
)
configure(r, sub)
s.addComponent(name, component{
name: name,
start: func(ctx context.Context) error {
return r.Run(ctx)
},
stop: func(ctx context.Context) error {
return r.Close()
},
})
}
}
// main.go — registered as named component
server.New(
server.WithWatermillRouter("events", func(r *message.Router, sub message.Subscriber) {
ports.RegisterEventHandlers(r, sub, application)
}),
server.WithHTTPHandler("api", createHandler),
server.OnShutdown(
server.Stop("events"), // 1. stop consuming
server.Stop("api"), // 2. drain HTTP
server.StopFunc(cleanup), // 3. close clients
),
).Run(ctx)
```
**Wrong:**
```go
// main.go — VIOLATION: infrastructure in main
func main() {
sub, _ := amqp.NewSubscriber(amqpConfig, logger) // VIOLATION
r, _ := message.NewRouter(message.RouterConfig{}, logger) // VIOLATION
r.AddMiddleware(wmMiddleware.Recoverer) // VIOLATION
r.Run(context.Background())
}
// main.go — VIOLATION: standalone RunWatermillRouter without unified server
server.RunWatermillRouter(func(r *message.Router, sub message.Subscriber) { ... })
// Cannot coordinate shutdown with other transports
```
---
## WM-02: Publisher Factory Returns (Publisher, Close, Error) Triple (CRITICAL)
Publisher creation MUST follow the same `(client, closeFunc, error)` triple-return pattern as `client.NewTrainerClient()` and `client.NewUsersClient()`. Config comes from environment variables.
**Check procedure:**
1. Verify publisher factory in `internal/common/client/watermill.go`
2. Must return `(message.Publisher, func() error, error)`
3. Must read `AMQP_URI` from env
4. Error case must return a no-op close function, never nil
**Correct:**
```go
// internal/common/client/watermill.go
func NewWatermillPublisher() (pub message.Publisher, close func() error, err error) {
amqpURI := os.Getenv("AMQP_URI")
if amqpURI == "" {
return nil, func() error { return nil }, errors.New("empty env AMQP_URI")
}
logger := watermill.NewStdLoggerWithOut(os.Stdout, true, false)
config := amqp.NewDurableQueueConfig(amqpURI)
publisher, err := amqp.NewPublisher(config, logger)
if err != nil {
return nil, func() error { return nil }, errors.Wrap(err, "cannot create watermill publisher")
}
return publisher, publisher.Close, nil
}
```
**Wrong:**
```go
// VIOLATION: returns raw connection, no close function
func NewPublisher() *amqp.Publisher {
pub, _ := amqp.NewPublisher(config, logger)
return pub
}
// VIOLATION: nil close function on error path
func NewPublisher() (message.Publisher, func() error, error) {
// ...
return nil, nil, err // nil close panics on defer
}
```
---
## WM-03: Event Handlers Live in Ports (CRITICAL)
Watermill event handlers are **inbound adapters** — they are ports, just like HTTP and gRPC handlers. They MUST:
- Live in `ports/`
- Hold `app.Application`
- Delegate to command/query handlers
- Contain NO business logic
**Check procedure:**
1. Scan for `message.HandlerFunc` or `func(*message.Message) error` signatures
2. These MUST be in `ports/` package
3. Must import `app/`, `app/command/`, or `app/query/` — not `domain/` directly
4. Must follow the same delegation pattern as HTTP/gRPC handlers
**Correct:**
```go
// ports/event.go
type EventHandlers struct {
app app.Application
}
func RegisterEventHandlers(r *message.Router, sub message.Subscriber, application app.Application) {
handlers := EventHandlers{app: application}
r.AddNoPublisherHandler(
"OnTrainingScheduled",
"training.scheduled",
sub,
handlers.OnTrainingScheduled,
)
}
func (h EventHandlers) OnTrainingScheduled(msg *message.Message) error {
var event TrainingScheduledEvent
if err := json.Unmarshal(msg.Payload, &event); err != nil {
return err
}
return h.app.Commands.ScheduleTraining.Handle(
msg.Context(),
command.ScheduleTraining{Hour: event.Hour},
)
}
```
**Wrong:**
```go
// adapters/event_handler.go — VIOLATION: handler in adapters/
func HandleTrainingScheduled(msg *message.Message) error {
repo.Save(ctx, training) // VIOLATION: direct repo access
}
// app/command/schedule_training.go — VIOLATION: message parsing in app layer
func (h handler) Handle(ctx context.Context, msg *message.Message) error { ... }
```
---
## WM-04: Event Publisher Adapter Implements Domain Interface (WARNING)
Publishing events MUST go through an adapter that implements an interface defined in the app or domain layer. The app layer defines *what* events to publish; the adapter knows *how*.
This keeps Watermill as a swappable infrastructure detail.
**Check procedure:**
1. Look for `message.Publisher` usage — it MUST NOT appear in `app/` or `domain/`
2. An interface like `EventPublisher` should be in `app/command/services.go` or similar
3. The concrete adapter in `adapters/` implements it using Watermill
**Correct:**
```go
// app/command/services.go
type TrainingEventPublisher interface {
TrainingScheduled(ctx context.Context, t training.Training) error
TrainingCancelled(ctx context.Context, trainingUUID string) error
}
// adapters/training_event_publisher.go
type WatermillTrainingEventPublisher struct {
pub message.Publisher
}
func NewWatermillTrainingEventPublisher(pub message.Publisher) WatermillTrainingEventPublisher {
return WatermillTrainingEventPublisher{pub: pub}
}
func (p WatermillTrainingEventPublisher) TrainingScheduled(ctx context.Context, t training.Training) error {
payload, err := json.Marshal(TrainingScheduledEvent{UUID: t.UUID(), Hour: t.Time()})
if err != nil { return err }
msg := message.NewMessage(watermill.NewUUID(), payload)
	middleware.SetCorrelationID(watermill.NewUUID(), msg)
return p.pub.Publish("training.scheduled", msg)
}
```
**Wrong:**
```go
// app/command/schedule_training.go — VIOLATION: Watermill in app layer
import "github.com/ThreeDotsLabs/watermill/message"
func (h handler) Handle(ctx context.Context, cmd ScheduleTraining) error {
msg := message.NewMessage(watermill.NewUUID(), payload) // VIOLATION
h.publisher.Publish("topic", msg) // VIOLATION: infra detail
}
```
---
## WM-05: Topic Naming Uses Domain Language (WARNING)
Topic/queue names MUST use domain language with dot notation: `{aggregate}.{past-tense-event}`. No CRUD names, no technical prefixes.
**Correct:**
```
training.scheduled
training.cancelled
training.reschedule_requested
hour.made_available
```
**Wrong:**
```
create-training // VIOLATION: CRUD name
events.training.created // VIOLATION: redundant "events" prefix, CRUD
TRAINING_QUEUE // VIOLATION: technical name, not domain event
```
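The convention above is mechanically checkable. `validTopic` is a hypothetical lint helper, not part of the scaffold: it accepts only lowercase snake_case segments joined by a single dot.

```go
package main

import (
	"fmt"
	"regexp"
)

// topicRe matches {aggregate}.{past_tense_event}: lowercase snake_case
// segments, exactly one dot, no hyphens, no uppercase, no extra prefixes.
var topicRe = regexp.MustCompile(`^[a-z]+(?:_[a-z]+)*\.[a-z]+(?:_[a-z]+)*$`)

func validTopic(name string) bool {
	return topicRe.MatchString(name)
}

func main() {
	fmt.Println(validTopic("training.scheduled"))  // true
	fmt.Println(validTopic("hour.made_available")) // true
	fmt.Println(validTopic("create-training"))     // false: CRUD name, hyphen
	fmt.Println(validTopic("TRAINING_QUEUE"))      // false: technical constant
}
```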
---
## WM-06: Event Structs Live in the Publishing Port or Adapter (INFO)
Event DTOs (the JSON payloads) are protocol-specific — they belong in `ports/` or `adapters/`, NOT in `domain/`. Domain entities are the canonical model; events are a serialization concern.
**Check procedure:**
1. Look for event structs (e.g., `TrainingScheduledEvent`)
2. They MUST be in `ports/` (if consumed by event handlers) or `adapters/` (if produced by publisher adapters)
3. They MUST NOT be in `domain/`
**Correct:**
```go
// ports/event.go or adapters/training_event_publisher.go
type TrainingScheduledEvent struct {
UUID string `json:"uuid"`
Hour time.Time `json:"hour"`
}
```
---
## WM-07: Watermill Middleware in With* Option Only (WARNING)
Watermill middleware (retry, correlation ID, recoverer, throttle, etc.) MUST be configured exclusively inside the `WithWatermillRouter` option in `internal/common/server/watermill.go` — same principle as ARCH-06 for HTTP/gRPC middleware.
**Check procedure:**
1. Scan for `r.AddMiddleware` or `router.AddMiddleware` calls
2. All MUST be in `internal/common/server/watermill.go` (inside `WithWatermillRouter`)
3. Flag any middleware setup in `main.go`, `ports/`, or `service/`
---
## WM-08: Publisher Cleanup via OnShutdown or Composition Root (WARNING)
When a service publishes events, the publisher's close function MUST be called as part of the shutdown sequence. Two valid patterns:
**Pattern A — cleanup in OnShutdown (preferred when using unified server):**
```go
server.New(
server.WithHTTPHandler("api", createHandler),
server.OnShutdown(
server.Stop("api"), // 1. drain HTTP (in-flight may publish)
server.StopFunc(cleanup), // 2. close publisher + clients
),
).Run(ctx)
```
**Pattern B — cleanup via defer (simpler services):**
```go
app, cleanup := service.NewApplication(ctx)
defer cleanup() // runs after Run() returns
server.New(
server.WithHTTPHandler("api", createHandler),
server.OnShutdown(
server.Stop("api"),
),
).Run(ctx)
// cleanup() runs here via defer — publisher closes after server drained
```
**Check procedure:**
1. If `service/application.go` creates a publisher, verify close is either in `OnShutdown` or in the cleanup function
2. Publisher close MUST happen *after* all transports that might publish are stopped
3. Closing publisher before draining HTTP/gRPC = lost messages
**Wrong:**
```go
// main.go — VIOLATION: publisher lifecycle in main, not ordered
func main() {
pub, closePub, _ := client.NewWatermillPublisher()
defer closePub() // VIOLATION: may close before HTTP drains
app := service.NewApplication(ctx, pub) // VIOLATION: infra detail leaked
}
```
---
## WM-09: Named Components Replace SERVER_TO_RUN Switch (INFO)
With the unified server pattern (ARCH-08), the `SERVER_TO_RUN` environment variable switch is replaced by composing `With*` options. A service that needs HTTP + Watermill simply registers both.
**Correct — unified server:**
```go
// All transports in one process, explicit shutdown order
server.New(
server.WithWatermillRouter("events", func(r *message.Router, sub message.Subscriber) {
ports.RegisterEventHandlers(r, sub, app)
}),
server.WithHTTPHandler("api", func(router chi.Router) http.Handler {
return ports.HandlerFromMux(ports.NewHttpServer(app), router)
}),
server.OnShutdown(
server.Stop("events"),
server.Stop("api"),
server.StopFunc(cleanup),
),
).Run(ctx)
```
**Also acceptable — SERVER_TO_RUN for single-transport deployments:**
```go
// When deploying each transport as a separate container
switch serverType {
case "http":
server.New(
server.WithHTTPHandler("api", createHandler),
server.OnShutdown(server.Stop("api")),
).Run(ctx)
case "watermill":
server.New(
server.WithWatermillRouter("events", configureRouter),
server.OnShutdown(server.Stop("events")),
).Run(ctx)
}
```
---
## WM-10: Don't Replace Synchronous Side Effects with Fire-and-Forget (CRITICAL)
When replacing synchronous gRPC calls with async events, you MUST ensure the operation tolerates eventual consistency. If the caller needs confirmation that the action succeeded, keep it synchronous (gRPC) or use a saga/process manager — do NOT simply drop the response.
**Check procedure:**
1. For each gRPC adapter being replaced by events, check if the calling command inspects the return value or error
2. If the command makes decisions based on the result, it MUST remain synchronous or use a compensation pattern
3. Fire-and-forget is only valid for notifications, projections, and truly independent side effects
**Correct use of async:**
```go
// Notification — caller doesn't need the result
func (h handler) Handle(ctx context.Context, cmd ScheduleTraining) error {
// ... create training ...
// Fire event — consumer will send email, update dashboard, etc.
return h.eventPublisher.TrainingScheduled(ctx, training)
}
```
**Wrong use of async:**
```go
// VIOLATION: caller needs confirmation that hours were reserved
func (h handler) Handle(ctx context.Context, cmd ScheduleTraining) error {
	t, _ := training.NewTraining(...)
	h.eventPublisher.TrainingScheduled(ctx, t) // VIOLATION: no guarantee hours are available
	return h.repo.Save(ctx, t) // saved training without confirmed availability
}
// Previously this was a synchronous gRPC call that could fail and roll back
```

# Event Handler Scaffold Template
Generate a Watermill event handler port and its registration function. Event handlers are inbound adapters — they live in `ports/` and delegate to CQRS command/query handlers, identical to HTTP and gRPC handlers.
## Placeholders
- `{{Name}}` — PascalCase event name (e.g., `TrainingScheduled`)
- `{{name}}` — camelCase (e.g., `trainingScheduled`)
- `{{name_snake}}` — snake_case (e.g., `training_scheduled`)
- `{{topic}}` — Dot-notation topic name (e.g., `training.scheduled`)
- `{{module}}` — Go module path from go.mod
- `{{command}}` — Command to invoke, PascalCase (e.g., `ScheduleTraining`)
## File: `ports/event.go`
If this file already exists, append the handler method and registration line. If not, create it:
```go
package ports
import (
"encoding/json"
"github.com/ThreeDotsLabs/watermill/message"
"{{module}}/app"
"{{module}}/app/command"
)
type EventHandlers struct {
app app.Application
}
func RegisterEventHandlers(r *message.Router, sub message.Subscriber, application app.Application) {
handlers := EventHandlers{app: application}
r.AddNoPublisherHandler(
"On{{Name}}",
"{{topic}}",
sub,
handlers.On{{Name}},
)
// TODO: Register additional event handlers here
}
// {{Name}}Event is the event payload DTO — protocol-specific, not a domain object.
type {{Name}}Event struct {
// TODO: Add event fields matching the publisher's payload
// Example:
// UUID string `json:"uuid"`
// Hour time.Time `json:"hour"`
}
func (h EventHandlers) On{{Name}}(msg *message.Message) error {
var event {{Name}}Event
if err := json.Unmarshal(msg.Payload, &event); err != nil {
return err
}
// TODO: Construct command and delegate to app layer
// return h.app.Commands.{{command}}.Handle(msg.Context(), command.{{command}}{
// // Map event fields to command fields
// })
return nil
}
```
## Update `main.go`
Add `WithWatermillRouter` to the unified server and include it in `OnShutdown`:
```go
server.New(
server.WithWatermillRouter("events", func(r *message.Router, sub message.Subscriber) {
ports.RegisterEventHandlers(r, sub, application)
}),
server.WithHTTPHandler("api", func(router chi.Router) http.Handler {
return ports.HandlerFromMux(ports.NewHttpServer(application), router)
}),
server.OnShutdown(
server.Stop("events"), // 1. stop consuming first
server.Stop("api"), // 2. then drain HTTP
server.StopFunc(cleanup), // 3. then close clients
),
).Run(ctx)
```
## Update `docker-compose.yml`
Add `AMQP_URI` to the service environment (no separate container needed — all transports run in one process):
```yaml
{{service}}:
environment:
AMQP_URI: amqp://guest:guest@rabbitmq:5672/
depends_on:
- rabbitmq
```

# Event Publisher Adapter Scaffold Template
Generate a Watermill publisher adapter that implements a domain/app-layer interface. The adapter lives in `adapters/` and translates domain operations into published messages. The interface lives in `app/command/services.go`.
## Placeholders
- `{{Name}}` — PascalCase aggregate name (e.g., `Training`)
- `{{name}}` — camelCase (e.g., `training`)
- `{{name_snake}}` — snake_case (e.g., `training`)
- `{{name_lower}}` — all lowercase (e.g., `training`)
- `{{module}}` — Go module path from go.mod
- `{{event}}` — PascalCase first event name (e.g., `TrainingScheduled`)
- `{{topic}}` — Dot-notation topic (e.g., `training.scheduled`)
## File 1: `app/command/services.go`
If this file already exists, add the interface. Otherwise create it:
```go
package command
import "context"
// {{Name}}EventPublisher defines events that can be emitted for {{name_lower}} operations.
// Implemented by adapters (e.g., Watermill AMQP adapter).
type {{Name}}EventPublisher interface {
{{event}}(ctx context.Context) error
// TODO: Add more event methods as needed
// Example:
// {{Name}}Cancelled(ctx context.Context, uuid string) error
}
```
## File 2: `adapters/{{name_snake}}_event_publisher.go`
```go
package adapters
import (
"context"
"encoding/json"
"github.com/ThreeDotsLabs/watermill"
"github.com/ThreeDotsLabs/watermill/message"
"github.com/ThreeDotsLabs/watermill/message/router/middleware"
)
type Watermill{{Name}}EventPublisher struct {
pub message.Publisher
}
func NewWatermill{{Name}}EventPublisher(pub message.Publisher) Watermill{{Name}}EventPublisher {
return Watermill{{Name}}EventPublisher{pub: pub}
}
// {{event}}Event is the wire format for the {{topic}} topic.
type {{event}}Event struct {
// TODO: Add event payload fields
// Example:
// UUID string `json:"uuid"`
// Hour time.Time `json:"hour"`
}
func (p Watermill{{Name}}EventPublisher) {{event}}(ctx context.Context) error {
event := {{event}}Event{
// TODO: Map domain data to event fields
}
payload, err := json.Marshal(event)
if err != nil {
return err
}
msg := message.NewMessage(watermill.NewUUID(), payload)
middleware.SetCorrelationID(watermill.NewUUID(), msg)
return p.pub.Publish("{{topic}}", msg)
}
```
## Update `service/application.go`
Wire the publisher adapter in the composition root:
```go
func NewApplication(ctx context.Context) (app.Application, func()) {
// ... existing clients ...
publisher, closePub, err := client.NewWatermillPublisher()
if err != nil { panic(err) }
eventPublisher := adapters.NewWatermill{{Name}}EventPublisher(publisher)
return newApplication(ctx, eventPublisher),
func() {
// ... existing cleanup ...
_ = closePub()
}
}
```
Update the private `newApplication` to accept the publisher interface:
```go
func newApplication(
ctx context.Context,
eventPublisher command.{{Name}}EventPublisher,
// ... existing deps ...
) app.Application {
// ... pass eventPublisher to command handlers that need it
}
```
## Update command handler
Inject the publisher into the command handler that triggers the event:
```go
type {{name}}Handler struct {
{{name_lower}}Repo {{name_lower}}.Repository
eventPublisher command.{{Name}}EventPublisher
}
func (h {{name}}Handler) Handle(ctx context.Context, cmd {{command}}) error {
// ... domain logic ...
return h.eventPublisher.{{event}}(ctx)
}
```

### 8. `main.go`
```go
package main
import (
"context"
"net/http"
"{{module_common}}/logs"
"{{module_common}}/server"
"{{module}}/ports"
"{{module}}/service"
"github.com/go-chi/chi/v5"
)
func main() {
logs.Init()
ctx := context.Background()
app := service.NewApplication(ctx)
server.New(
server.WithHTTPHandler("api", func(router chi.Router) http.Handler {
return ports.HandlerFromMux(ports.NewHttpServer(app), router)
}),
server.OnShutdown(
server.Stop("api"),
),
).Run(ctx)
}
```
### 9. `adapters/memory_{{name_snake}}_repository.go`

```go
package adapters

// ... Memory{{Name}}Repository implementation (unchanged)
```

### 10. `service/application.go`

```go
package service

// ... NewApplication wiring (unchanged)
```

After creating the service skeleton:

1. Ensure the unified server exists: `/threedots scaffold unified_server`
2. Add your first command with `/threedots scaffold command <ActionName>`
3. Add your first query with `/threedots scaffold query <QueryName>`
4. Wire them in `service/application.go`
5. Add HTTP/gRPC handlers in `ports/`
6. When adding Watermill: `/threedots scaffold watermill_router` then `/threedots scaffold event_handler <Name>`

@ -0,0 +1,297 @@
# Unified Server Scaffold Template
Generate the core unified server infrastructure in `internal/common/server/`. This replaces the standalone `RunHTTPServer` / `RunGRPCServer` functions with a composable `server.New(...).Run(ctx)` pattern that supports multiple transports with explicit shutdown ordering.
Created once per project. Individual transports (`WithWatermillRouter`) can be added later.
## Placeholders
- `{{module_common}}` — Go module path to `internal/common` (e.g., `github.com/example/myproject/internal/common`)
## File 1: `internal/common/server/server.go`
```go
package server
import (
	"context"
	"os/signal"
	"sync"
	"syscall"
	"time"

	"github.com/sirupsen/logrus"
)
type Server struct {
components map[string]component
startOrder []string
shutdownSteps []ShutdownStep
}
type component struct {
name string
start func(ctx context.Context) error
stop func(ctx context.Context) error
}
type Option func(*Server)
func New(opts ...Option) *Server {
s := &Server{
components: make(map[string]component),
}
for _, opt := range opts {
opt(s)
}
return s
}
func (s *Server) addComponent(name string, c component) {
if _, exists := s.components[name]; exists {
panic("duplicate component name: " + name)
}
s.components[name] = c
s.startOrder = append(s.startOrder, name)
}
func (s *Server) Run(ctx context.Context) error {
ctx, stop := signal.NotifyContext(ctx, syscall.SIGINT, syscall.SIGTERM)
defer stop()
errCh := make(chan error, len(s.components))
for _, name := range s.startOrder {
c := s.components[name]
go func(c component) {
logrus.WithField("component", c.name).Info("Starting")
if err := c.start(ctx); err != nil {
errCh <- err
}
}(c)
}
	var runErr error
	select {
	case <-ctx.Done():
		logrus.Info("Shutdown signal received")
	case err := <-errCh:
		runErr = err
		logrus.WithError(err).Error("Component failed, initiating shutdown")
	}
	s.executeShutdown()
	return runErr
}
func (s *Server) executeShutdown() {
shutdownCtx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
stopped := map[string]bool{}
for _, step := range s.shutdownSteps {
if step.fn != nil {
logrus.Info("Running shutdown func")
if err := step.fn(shutdownCtx); err != nil {
logrus.WithError(err).Error("Shutdown func failed")
}
continue
}
var wg sync.WaitGroup
for _, name := range step.componentNames {
c, ok := s.components[name]
if !ok {
logrus.WithField("component", name).Warn("Unknown component in OnShutdown")
continue
}
stopped[name] = true
wg.Add(1)
go func(c component) {
defer wg.Done()
logrus.WithField("component", c.name).Info("Stopping")
if err := c.stop(shutdownCtx); err != nil {
logrus.WithError(err).WithField("component", c.name).Error("Stop failed")
}
}(c)
}
wg.Wait()
}
// Safety net: stop any components not mentioned in OnShutdown
var wg sync.WaitGroup
for name, c := range s.components {
if stopped[name] {
continue
}
wg.Add(1)
go func(c component) {
defer wg.Done()
logrus.WithField("component", c.name).Warn("Stopping (not in OnShutdown — add it)")
if err := c.stop(shutdownCtx); err != nil {
logrus.WithError(err).WithField("component", c.name).Error("Stop failed")
}
}(c)
}
wg.Wait()
}
```
## File 2: `internal/common/server/shutdown.go`
```go
package server
import "context"
// ShutdownStep is one step in the shutdown sequence.
type ShutdownStep struct {
componentNames []string
fn func(ctx context.Context) error
}
// Stop creates a shutdown step that stops named components.
// Multiple names in one call = parallel shutdown within the step.
func Stop(names ...string) ShutdownStep {
return ShutdownStep{componentNames: names}
}
// StopFunc creates a shutdown step that runs an arbitrary cleanup function.
func StopFunc(fn func()) ShutdownStep {
return ShutdownStep{
fn: func(ctx context.Context) error {
fn()
return nil
},
}
}
// StopFuncWithErr creates a shutdown step with error return.
func StopFuncWithErr(fn func(ctx context.Context) error) ShutdownStep {
return ShutdownStep{fn: fn}
}
// OnShutdown declares the shutdown sequence.
// Steps execute top-to-bottom. Each step completes before the next starts.
// Components not mentioned are stopped last with a warning.
func OnShutdown(steps ...ShutdownStep) Option {
return func(s *Server) {
s.shutdownSteps = steps
}
}
```
## File 3: `internal/common/server/http.go` (replace existing)
```go
package server
import (
"context"
"net/http"
"os"
"{{module_common}}/auth"
"{{module_common}}/logs"
"github.com/go-chi/chi/v5"
"github.com/go-chi/chi/v5/middleware"
"github.com/go-chi/cors"
"github.com/sirupsen/logrus"
)
func WithHTTPHandler(name string, createHandler func(chi.Router) http.Handler) Option {
return func(s *Server) {
addr := ":" + os.Getenv("PORT")
srv := &http.Server{Addr: addr}
s.addComponent(name, component{
name: name,
start: func(ctx context.Context) error {
apiRouter := chi.NewRouter()
setMiddlewares(apiRouter)
rootRouter := chi.NewRouter()
rootRouter.Mount("/api", createHandler(apiRouter))
srv.Handler = rootRouter
logrus.WithField("addr", addr).Info("Starting HTTP server")
if err := srv.ListenAndServe(); err != http.ErrServerClosed {
return err
}
return nil
},
stop: func(ctx context.Context) error {
return srv.Shutdown(ctx)
},
})
}
}
// setMiddlewares, addAuthMiddleware, addCorsMiddleware — same as existing
```
## File 4: `internal/common/server/grpc.go` (replace existing)
```go
package server
import (
"context"
"net"
"os"
grpc_middleware "github.com/grpc-ecosystem/go-grpc-middleware"
grpc_logrus "github.com/grpc-ecosystem/go-grpc-middleware/logging/logrus"
grpc_ctxtags "github.com/grpc-ecosystem/go-grpc-middleware/tags"
"github.com/sirupsen/logrus"
"google.golang.org/grpc"
)
func WithGRPCServer(name string, registerServer func(*grpc.Server)) Option {
return func(s *Server) {
logrusEntry := logrus.NewEntry(logrus.StandardLogger())
grpcSrv := grpc.NewServer(
grpc_middleware.WithUnaryServerChain(
grpc_ctxtags.UnaryServerInterceptor(grpc_ctxtags.WithFieldExtractor(grpc_ctxtags.CodeGenRequestFieldExtractor)),
grpc_logrus.UnaryServerInterceptor(logrusEntry),
),
grpc_middleware.WithStreamServerChain(
grpc_ctxtags.StreamServerInterceptor(grpc_ctxtags.WithFieldExtractor(grpc_ctxtags.CodeGenRequestFieldExtractor)),
grpc_logrus.StreamServerInterceptor(logrusEntry),
),
)
registerServer(grpcSrv)
port := os.Getenv("GRPC_PORT")
if port == "" {
port = "8080"
}
addr := ":" + port
s.addComponent(name, component{
name: name,
start: func(ctx context.Context) error {
lis, err := net.Listen("tcp", addr)
if err != nil {
return err
}
logrus.WithField("addr", addr).Info("Starting gRPC server")
return grpcSrv.Serve(lis)
},
stop: func(ctx context.Context) error {
grpcSrv.GracefulStop()
return nil
},
})
}
}
```
## Post-Creation Instructions
After creating the unified server:
1. Remove or replace the old `RunHTTPServer` / `RunGRPCServer` standalone functions
2. Update all `main.go` files to use `server.New(...).Run(ctx)` with `OnShutdown`
3. Add `/threedots scaffold watermill_router` to add Watermill support
4. Every component MUST appear in `OnShutdown` — the safety net logs warnings for forgotten ones

# Watermill Router Option + Publisher Client Scaffold Template
Generate the `WithWatermillRouter` server option in `internal/common/server/` and the publisher client factory in `internal/common/client/`. Requires the unified server scaffold (`/threedots scaffold unified_server`) to be in place first.
## Placeholders
- `{{module_common}}` — Go module path to `internal/common` (e.g., `github.com/example/myproject/internal/common`)
## File 1: `internal/common/server/watermill.go`
```go
package server
import (
"context"
"os"
"github.com/ThreeDotsLabs/watermill"
"github.com/ThreeDotsLabs/watermill-amqp/v3/pkg/amqp"
"github.com/ThreeDotsLabs/watermill/message"
wmMiddleware "github.com/ThreeDotsLabs/watermill/message/router/middleware"
)
func WithWatermillRouter(
name string,
configure func(*message.Router, message.Subscriber),
) Option {
return func(s *Server) {
wmLogger := watermill.NewStdLoggerWithOut(os.Stdout, true, false)
amqpURI := os.Getenv("AMQP_URI")
if amqpURI == "" {
amqpURI = "amqp://guest:guest@rabbitmq:5672/"
}
amqpConfig := amqp.NewDurableQueueConfig(amqpURI)
sub, err := amqp.NewSubscriber(amqpConfig, wmLogger)
if err != nil {
panic("cannot create watermill subscriber: " + err.Error())
}
r, err := message.NewRouter(message.RouterConfig{}, wmLogger)
if err != nil {
panic("cannot create watermill router: " + err.Error())
}
r.AddMiddleware(
wmMiddleware.CorrelationID,
wmMiddleware.Recoverer,
wmMiddleware.Retry{MaxRetries: 3}.Middleware,
)
configure(r, sub)
s.addComponent(name, component{
name: name,
start: func(ctx context.Context) error {
return r.Run(ctx)
},
stop: func(ctx context.Context) error {
return r.Close()
},
})
}
}
```
## File 2: `internal/common/client/watermill.go`
```go
package client
import (
"os"
"github.com/ThreeDotsLabs/watermill"
"github.com/ThreeDotsLabs/watermill-amqp/v3/pkg/amqp"
"github.com/ThreeDotsLabs/watermill/message"
"github.com/pkg/errors"
)
func NewWatermillPublisher() (pub message.Publisher, close func() error, err error) {
amqpURI := os.Getenv("AMQP_URI")
if amqpURI == "" {
return nil, func() error { return nil }, errors.New("empty env AMQP_URI")
}
logger := watermill.NewStdLoggerWithOut(os.Stdout, true, false)
config := amqp.NewDurableQueueConfig(amqpURI)
publisher, err := amqp.NewPublisher(config, logger)
if err != nil {
return nil, func() error { return nil }, errors.Wrap(err, "cannot create watermill publisher")
}
return publisher, publisher.Close, nil
}
```
## Post-Creation Instructions
After creating the Watermill option and publisher:
1. Add `github.com/ThreeDotsLabs/watermill` and `github.com/ThreeDotsLabs/watermill-amqp/v3` to `go.mod`
2. Add `AMQP_URI` to `.env`, `.test.env`, and `docker-compose.yml`
3. Add a RabbitMQ service to `docker-compose.yml`:
```yaml
rabbitmq:
image: rabbitmq:3-management
ports:
- "5672:5672"
- "15672:15672"
```
4. Use `/threedots scaffold event_handler <Name>` to create event handlers in a service
5. Use `/threedots scaffold event_publisher <Name>` to create a publisher adapter
6. Add `server.WithWatermillRouter("events", ...)` and include `"events"` in `OnShutdown`