Initial commit - Claude Code configuration and project memory
Includes:
- Global CLAUDE.md (instructions, git/docker config)
- settings.json, statusline config
- Plugin registry (non-marketplace)
- Project memory files for all active projects
- Plans

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

commit e2b4ce77dd

@ -0,0 +1,37 @@
# === SENSITIVE - never commit ===
.credentials.json

# === SESSION HISTORY - large, transient ===
*.jsonl
sessions-index.json

# Session UUID directories
projects/*/[0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f]-[0-9a-f]*/

# === CACHE & TRANSIENT DATA ===
cache/
debug/
file-history/
paste-cache/
shell-snapshots/
stats-cache.json
statsig/
telemetry/
mcp-needs-auth-cache.json
backups/
ide/

# === SESSION TASK/TODO TRACKERS (ephemeral per-session) ===
tasks/
todos/

# === PLUGINS - marketplace plugins are external repos, re-downloadable ===
plugins/marketplaces/

# === WHAT IS TRACKED ===
# CLAUDE.md - global config/instructions
# settings.json - global settings
# statusline-command.sh - statusline config
# plugins/ - installed plugins
# projects/*/memory/** - project memory files
# plans/, tasks/, todos/ - persisted work items

@ -0,0 +1,3 @@
{
  "claudeCode.useTerminal": true
}

@ -0,0 +1,17 @@
# Global Claude Configuration

## Git Hosting
- Using **Gitea** for Git hosting (not GitHub/GitLab)
- Use `gh` CLI alternatives or direct git commands where appropriate
- CI/CD workflows use `.gitea/workflows/` directory

## Docker Registry
- Docker registry URL: **192.168.200.200:5000**
- All HiveOps repositories use this registry for image storage
- This applies to all `.env` files, build scripts, and documentation

## Project Memory
- Every project Claude is run under must have a memory folder created at the start of the session
- Memory path: `~/.claude/projects/{project-path-slug}/memory/MEMORY.md`
- If the memory folder/file does not exist, create it before doing any work
- Save key architectural decisions, file locations, bugs fixed, workflows, and user preferences

@ -0,0 +1,486 @@
# Application Log Error Monitoring Module - Implementation Plan

## Context

The user needs to monitor ATM application log files (APLog*.log) for ERROR-level entries and report them to the hiveops-incident backend. Additionally, errors should be correlated with transactions when possible to provide context for troubleshooting.

**Why this is needed**: Application errors often indicate software issues that can lead to transaction failures or service degradation. By capturing and reporting these errors to the incident management system, operators can proactively identify and resolve problems before they impact customers.

**Current state**: The system already monitors journal files for hardware events (card reader failures, cassette issues, etc.) via the journal-events module. However, application-level software errors are not captured.

**File encodings**:
- **Application logs** (`examples/20260215_APP/APLog20260215.log`): UTF-8/ASCII encoding, straightforward to parse
- **Device journals** (`examples/20260215_EJ/ej_BP000125_20260215.txt`): UTF-16 Little Endian with BOM, requires UTF16LEReader
- **Server journals** (`examples/20260215_EJ/20260215.jrn`): UTF-8 (server-processed), not used by agent

**Important path structure**:
- Application log files are organized in **date-based subdirectories**: `d:\MoniPlus2SLog\20260215\APLog20260215.log`
- Each day creates a new folder (e.g., `20260215`, `20260216`, etc.)
- The AppLogSource must navigate to the correct date folder to find the current log file

## Approach

Create a new module called **app-log-events** within the existing `hiveops-journal` Maven module. This module will:

1. Monitor application log files for ERROR entries
2. Parse and categorize errors by type (card, dispenser, network, encryption, etc.)
3. Optionally correlate errors with transactions using timestamp proximity
4. Send structured error events to the atm-incident backend
5. Track file position to avoid reprocessing on restart

**Design decision**: Build within the existing hiveops-journal module (rather than creating a new Maven module) because it shares the same destination backend, HTTP client infrastructure, and conceptual domain (time-series ATM operational data).

## Implementation Steps

### 1. Add New Event Types (Agent Side)

**File**: `/source/hiveops-src/hiveops-agent/hiveops-journal/src/main/java/com/hiveops/events/EventType.java`

Add application error event types after line 47:

```java
// Application Error Events
APPLICATION_ERROR,            // Generic application error
APPLICATION_ERROR_CARD,       // Card-related application error
APPLICATION_ERROR_DISPENSER,  // Dispenser-related error
APPLICATION_ERROR_NETWORK,    // Network-related error
APPLICATION_ERROR_ENCRYPTION, // Encryption/security error
APPLICATION_ERROR_JOURNAL     // Journal upload error
```

### 2. Create AppLogSource Class

**File**: `/source/hiveops-src/hiveops-agent/hiveops-journal/src/main/java/com/hiveops/applogs/AppLogSource.java` (new)

Similar to `JournalSource` but specialized for application logs with **date-based subdirectories**:

```java
package com.hiveops.applogs;

import java.io.File;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class AppLogSource {
    private final String baseDir;        // e.g., "d:\MoniPlus2SLog"
    private final String filenameFormat; // "APLog{YYYY}{MM}{DD}.log"
    private final String atmName;

    public AppLogSource(String baseDir, String filenameFormat, String atmName) {
        this.baseDir = baseDir;
        this.filenameFormat = filenameFormat;
        this.atmName = atmName;
    }

    public File getCurrentLogFile() {
        String date = LocalDate.now().format(DateTimeFormatter.ofPattern("yyyyMMdd"));

        // Application logs are in date-based subdirectories
        // e.g., d:\MoniPlus2SLog\20260215\APLog20260215.log
        File dateDir = new File(baseDir, date);

        String filename = filenameFormat
            .replace("{YYYY}", date.substring(0, 4))
            .replace("{MM}", date.substring(4, 6))
            .replace("{DD}", date.substring(6, 8));

        return new File(dateDir, filename);
    }

    // Getters...
}
```

**Pattern**: Extends the JournalSource pattern with date-based subdirectory navigation.

### 3. Create AppLogParser Interface and Implementation

**File**: `/source/hiveops-src/hiveops-agent/hiveops-journal/src/main/java/com/hiveops/applogs/AppLogParser.java` (new)

```java
package com.hiveops.applogs;

import com.hiveops.events.dto.CreateJournalEventRequest;
import java.util.List;

public interface AppLogParser {
    List<CreateJournalEventRequest> parseLine(String line, String agentAtmId);
}
```

**File**: `/source/hiveops-src/hiveops-agent/hiveops-journal/src/main/java/com/hiveops/applogs/SimpleAppLogParser.java` (new)

Key responsibilities:
- Parse log line format: `ERROR [YYYY-MM-DD HH:MM:SS-mmm] [Class.Method] Message`
- Extract timestamp, class/method, and message
- Categorize error using regex patterns (configurable via properties)
- Build CreateJournalEventRequest with eventSource="HIVEOPS_AGENT_APPLOG"

**Pattern to reuse**: Follow the SimpleJournalEventParser.java structure with regex-based pattern matching and configurable patterns via properties.

**Example categorization**:
```java
private EventType categorizeError(String className, String method, String message) {
    String combined = (className + "." + method + " " + message).toLowerCase();

    if (cardErrorPattern.matcher(combined).find())
        return EventType.APPLICATION_ERROR_CARD;
    if (dispenserErrorPattern.matcher(combined).find())
        return EventType.APPLICATION_ERROR_DISPENSER;
    if (networkErrorPattern.matcher(combined).find())
        return EventType.APPLICATION_ERROR_NETWORK;
    if (encryptionErrorPattern.matcher(combined).find())
        return EventType.APPLICATION_ERROR_ENCRYPTION;
    if (journalErrorPattern.matcher(combined).find())
        return EventType.APPLICATION_ERROR_JOURNAL;

    return EventType.APPLICATION_ERROR; // default
}
```

**Default patterns** (configurable via properties):
- Card: `cardreader|idc|chip.*error|card.*(fail|jam|stuck)`
- Dispenser: `cashdispenser|brm|dispens.*error|cash.*jam`
- Network: `network|connection|socket|tcp.*error`
- Encryption: `encrypt|decrypt|certificate|crypto`
- Journal: `ejournaluploader|journal.*upload`
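
The extraction step described above (timestamp, class, method, message) can be sketched with a single regex. This is an illustrative sketch, not code from the plan; the class name `AppLogLineFormat` and the exact pattern are assumptions based on the documented line format:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch of parsing "ERROR [YYYY-MM-DD HH:MM:SS-mmm] [Class.Method] Message"
public class AppLogLineFormat {
    private static final Pattern ERROR_LINE = Pattern.compile(
        "^ERROR \\[(\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2})-(\\d{3})\\] \\[([^.\\]]+)\\.([^\\]]+)\\] (.*)$");

    // Returns {isoTimestamp, className, method, message}, or null for non-ERROR lines.
    public static String[] parse(String line) {
        Matcher m = ERROR_LINE.matcher(line);
        if (!m.matches()) return null; // not an ERROR line; caller skips it
        LocalDateTime ts = LocalDateTime
            .parse(m.group(1), DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"))
            .plusNanos(Long.parseLong(m.group(2)) * 1_000_000L); // -mmm suffix is milliseconds
        return new String[] { ts.toString(), m.group(3), m.group(4), m.group(5) };
    }
}
```

Returning null for non-matching lines gives the "handle malformed lines gracefully" behavior the test plan calls for.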

### 4. Create Transaction Correlator (Optional)

**File**: `/source/hiveops-src/hiveops-agent/hiveops-journal/src/main/java/com/hiveops/applogs/TransactionCorrelator.java` (new)

Only create this if `applog.events.correlation.enabled=true`. Responsibilities:
- Load recent transactions from **device journal files** (UTF-16LE encoded, e.g., `ej_BP000125_20260215.txt`)
- Maintain in-memory timeline using circular buffer
- Find nearest transaction for a given error timestamp (within 30-second window)
- Enrich event details with transaction context

**Important**: Device journal files use UTF-16 Little Endian encoding with BOM (`ff fe`). Use the existing `UTF16LEReader` class from `/source/hiveops-src/hiveops-agent/hiveops-core/src/main/java/com/hiveops/http/UTF16LEReader.java` to read these files properly.

**Data structure**:
```java
class TransactionContext {
    LocalDateTime timestamp;
    String sequenceNumber;
    String transactionType;
}
```

**Pattern**: Use Apache Commons CircularFifoQueue for memory-efficient transaction history.

**Journal parsing strategy**:
```java
// Read device journal file with UTF-16LE encoding
JournalSource journalSource = findJournalSource(context);
File journalFile = journalSource.getCurrentJournalFile();
UTF16LEReader reader = new UTF16LEReader();

// Parse lines for transaction markers
Pattern txnStartPattern = Pattern.compile("\\[.*?\\]TRANSACTION START");
Pattern txnSeqPattern = Pattern.compile("Trans SEQ Number \\[(\\d+)\\]");
```

**Enrichment example**:
```
Original: "ERROR [2026-02-15 00:02:37-678] [Encryption.EncryptString] Error found while encrypting"
Enriched: "[Transaction 4727 @ 2026-02-15T00:02:10] ERROR [2026-02-15 00:02:37-678] [Encryption.EncryptString] Error found while encrypting"
```
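
The nearest-transaction lookup described above can be sketched as follows. The plan calls for Apache Commons CircularFifoQueue; this sketch substitutes a plain `ArrayDeque` with manual eviction to stay dependency-free, and the class/method names are illustrative assumptions:

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the bounded timeline plus nearest-match search within the window.
public class TransactionTimeline {
    public static class TransactionContext {
        final LocalDateTime timestamp;
        final String sequenceNumber;
        public TransactionContext(LocalDateTime ts, String seq) {
            this.timestamp = ts;
            this.sequenceNumber = seq;
        }
    }

    private final Deque<TransactionContext> recent = new ArrayDeque<>();
    private final long windowSeconds;
    private final int maxTransactions;

    public TransactionTimeline(long windowSeconds, int maxTransactions) {
        this.windowSeconds = windowSeconds;
        this.maxTransactions = maxTransactions;
    }

    public void add(TransactionContext txn) {
        if (recent.size() == maxTransactions) recent.removeFirst(); // evict oldest
        recent.addLast(txn);
    }

    // Nearest transaction by absolute time distance; null when none falls inside the window.
    public TransactionContext findNearest(LocalDateTime errorTime) {
        TransactionContext best = null;
        long bestDelta = Long.MAX_VALUE;
        for (TransactionContext t : recent) {
            long delta = Math.abs(Duration.between(t.timestamp, errorTime).getSeconds());
            if (delta <= windowSeconds && delta < bestDelta) {
                bestDelta = delta;
                best = t;
            }
        }
        return best;
    }
}
```

With the enrichment example above, an error at 00:02:37 would match transaction 4727 logged at 00:02:10 (27 s away, inside the 30 s window).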

### 5. Create AppLogEventProcessor

**File**: `/source/hiveops-src/hiveops-agent/hiveops-journal/src/main/java/com/hiveops/applogs/AppLogEventProcessor.java` (new)

Background thread that:
1. Monitors the app log file for changes (polling-based, similar to JournalEventProcessor)
2. Reads new content from the last byte offset using RandomAccessFile
3. Parses ERROR lines using AppLogParser
4. Optionally correlates with transactions
5. Batches events and sends via IncidentEventClient
6. Persists position to survive restarts

**Position file**: `{applog.dir}/{atmName}-applogs.position`
```properties
filename=APLog20260216.log
position=1245678
lastProcessedTime=2026-02-16T10:35:22.456
```

**Pattern to reuse**: Copy the structure of JournalEventProcessor.java, adapting for line-by-line processing instead of chunk uploading.
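
Since the position file is a plain key/value list, `java.util.Properties` covers the persistence. A minimal sketch (class name `PositionStore` is an assumption; the keys follow the example above), including the filename check that resets the offset after midnight rotation:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Properties;

// Sketch of position-file persistence using java.util.Properties.
public class PositionStore {
    public static void save(File positionFile, String filename, long position,
                            String lastProcessedTime) throws IOException {
        Properties p = new Properties();
        p.setProperty("filename", filename);
        p.setProperty("position", Long.toString(position));
        p.setProperty("lastProcessedTime", lastProcessedTime);
        try (OutputStream out = new FileOutputStream(positionFile)) {
            p.store(out, "app-log-events position");
        }
    }

    // Returns 0 when the file is missing or names a different log file
    // (e.g. after midnight rotation), so processing restarts from the top.
    public static long load(File positionFile, String currentFilename) throws IOException {
        if (!positionFile.exists()) return 0L;
        Properties p = new Properties();
        try (InputStream in = new FileInputStream(positionFile)) {
            p.load(in);
        }
        if (!currentFilename.equals(p.getProperty("filename"))) return 0L;
        return Long.parseLong(p.getProperty("position", "0"));
    }
}
```

The filename comparison is what makes the manual-verification item "test log file rotation at midnight" pass without reprocessing old content.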

**Key loop structure**:
```java
while (running) {
    File currentFile = source.getCurrentLogFile();
    ProcessingState state = loadState();

    // Read new content from last position
    List<String> newLines = readNewLines(currentFile, state.position);
    List<CreateJournalEventRequest> events = new ArrayList<>();

    for (String line : newLines) {
        events.addAll(parser.parseLine(line, atmName));
    }

    // Send batch
    if (!events.isEmpty()) {
        eventClient.sendEvents(events);
        saveState(state);
    }

    Thread.sleep(recheckDelayMs);
}
```
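
The `readNewLines` helper in the loop above is only named, not shown. A self-contained sketch of the RandomAccessFile offset-reading approach (class and field names are assumptions, not part of the plan):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Sketch: seek to the stored offset, read complete lines, report the new offset.
public class IncrementalReader {
    public static final class Result {
        public final List<String> lines = new ArrayList<>();
        public long newPosition;
    }

    public static Result readNewLines(File file, long fromPosition) throws IOException {
        Result r = new Result();
        try (RandomAccessFile raf = new RandomAccessFile(file, "r")) {
            if (fromPosition > raf.length()) fromPosition = 0; // file truncated or rotated
            raf.seek(fromPosition);
            r.newPosition = fromPosition;
            String line;
            while ((line = raf.readLine()) != null) {
                // RandomAccessFile.readLine decodes bytes as ISO-8859-1;
                // re-decode so UTF-8 log content survives intact.
                r.lines.add(new String(line.getBytes(StandardCharsets.ISO_8859_1),
                                       StandardCharsets.UTF_8));
                r.newPosition = raf.getFilePointer();
            }
        }
        return r;
    }
}
```

A production version would additionally hold back a trailing partial line (one the writer has not yet terminated with a newline) until the next poll; this sketch omits that for brevity.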

### 6. Create AppLogEventModule

**File**: `/source/hiveops-src/hiveops-agent/hiveops-journal/src/main/java/com/hiveops/applogs/AppLogEventModule.java` (new)

AgentModule implementation for lifecycle management:

```java
package com.hiveops.applogs;

import java.util.Arrays;
import java.util.List;
import java.util.Properties;

import com.hiveops.core.module.AgentModule;
import com.hiveops.core.module.ModuleContext;
import com.hiveops.core.module.ModuleInitializationException;
import com.hiveops.events.IncidentEventClient;
import com.hiveops.http.HttpClientSettings;
import com.hiveops.journals.JournalSource;

public class AppLogEventModule implements AgentModule {
    private AppLogEventProcessor processor;
    private Thread thread;

    @Override
    public String getName() { return "app-log-events"; }

    @Override
    public String getVersion() { return "1.0.0"; }

    @Override
    public List<String> getDependencies() {
        // Depend on journal-upload for transaction correlation
        return Arrays.asList("journal-upload", "journal-events");
    }

    @Override
    public void initialize(ModuleContext context) throws ModuleInitializationException {
        Properties props = context.getMainProperties();

        // Check if enabled
        boolean enabled = Boolean.parseBoolean(
                props.getProperty("applog.events.enabled", "true"));
        String incidentEndpoint = props.getProperty("incident.endpoint");

        if (!enabled || incidentEndpoint == null) {
            return; // isEnabled() will return false
        }

        // Load configuration
        String logDir = props.getProperty("applog.events.dir");
        String filenameFormat = props.getProperty("applog.events.filename.format",
                "APLog{YYYY}{MM}{DD}.log");

        // Create components
        AppLogSource source = new AppLogSource(logDir, filenameFormat,
                context.getAtmName());

        HttpClientSettings settings = new HttpClientSettings();
        settings.setEndpoint(incidentEndpoint);
        IncidentEventClient client = new IncidentEventClient(settings,
                context.getAtmName(),
                context.getCountry());

        AppLogParser parser = new SimpleAppLogParser(props);

        // Optional: create correlator if enabled
        TransactionCorrelator correlator = null;
        if (Boolean.parseBoolean(props.getProperty("applog.events.correlation.enabled", "false"))) {
            // findJournalSource(...) is a private helper to be implemented with this module
            JournalSource journalSource = findJournalSource(context);
            if (journalSource != null) {
                correlator = new TransactionCorrelator(journalSource, 30, 1000);
            }
        }

        long recheckDelay = Long.parseLong(
                props.getProperty("applog.events.recheck.delay.msec", "5000"));
        int batchSize = Integer.parseInt(
                props.getProperty("applog.events.batch.size", "50"));

        processor = new AppLogEventProcessor(source, client, parser, correlator,
                recheckDelay, batchSize,
                context.getAtmName());
    }

    @Override
    public boolean isEnabled(ModuleContext context) {
        String endpoint = context.getMainProperties().getProperty("incident.endpoint");
        boolean enabled = Boolean.parseBoolean(
                context.getMainProperties().getProperty("applog.events.enabled", "true"));
        return endpoint != null && enabled && processor != null;
    }

    @Override
    public void start() {
        if (processor != null) {
            thread = new Thread(processor, "app-log-events");
            thread.start();
        }
    }

    @Override
    public void stop() {
        if (processor != null) processor.stop();
        if (thread != null) thread.interrupt();
    }
}
```

**Pattern**: Follow the exact structure of JournalEventModule.java.

### 7. Register Module via ServiceLoader

**File**: `/source/hiveops-src/hiveops-agent/hiveops-journal/src/main/resources/META-INF/services/com.hiveops.core.module.AgentModule`

Add the new module to the existing file:

```
com.hiveops.journals.JournalUploadModule
com.hiveops.events.JournalEventModule
com.hiveops.applogs.AppLogEventModule
```
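
For context, the META-INF/services entry works because the JDK's `ServiceLoader` scans those files and instantiates each listed class at runtime. A minimal standalone illustration of the mechanism (the nested interface here is a stand-in, not the real `com.hiveops.core.module.AgentModule`; how the agent host actually iterates modules is an assumption):

```java
import java.util.ServiceLoader;

// Standalone illustration of the ServiceLoader discovery mechanism.
public class ModuleDiscovery {
    // Stand-in for com.hiveops.core.module.AgentModule
    public interface AgentModule { String getName(); }

    public static int countModules() {
        int count = 0;
        // Instantiates every class listed in
        // META-INF/services/ModuleDiscovery$AgentModule on the classpath.
        for (AgentModule m : ServiceLoader.load(AgentModule.class)) {
            System.out.println("Discovered module: " + m.getName());
            count++;
        }
        return count;
    }
}
```

Once the `com.hiveops.applogs.AppLogEventModule` line is added, the equivalent loop in the agent host picks the module up with no further wiring.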

### 8. Add Configuration Properties

**File**: `/source/hiveops-src/hiveops-agent/hiveops-app/src/main/resources/hiveops.properties`

Add configuration section:

```properties
# ATM Incident integration endpoint
incident.endpoint=https://incident.bcos.cloud

# Application Log Error Monitoring
applog.events.enabled=true
applog.events.dir=d:\\MoniPlus2SLog
applog.events.filename.format=APLog{YYYY}{MM}{DD}.log
applog.events.recheck.delay.msec=5000
applog.events.batch.size=50

# Transaction correlation (optional)
applog.events.correlation.enabled=true
applog.events.correlation.window.sec=30
applog.events.correlation.max.transactions=1000

# Pattern overrides (optional)
#applog.pattern.APPLICATION_ERROR_CARD=cardreader|idc|chip.*error
#applog.pattern.APPLICATION_ERROR_DISPENSER=cashdispenser|brm|dispens.*error
```

**Note**: `applog.events.dir` is the base directory. The module automatically navigates to date-based subdirectories (e.g., `d:\MoniPlus2SLog\20260215\`).

### 9. Write Tests

**File**: `/source/hiveops-src/hiveops-agent/hiveops-journal/src/test/java/com/hiveops/applogs/SimpleAppLogParserTest.java` (new)

Test cases:
- Parse ERROR line with valid format
- Extract timestamp, class, method, message
- Categorize errors by pattern (card, dispenser, network, etc.)
- Handle malformed lines gracefully
- Test custom pattern configuration
- Test multiple ERROR types in different lines

**Test data** (use the example files in `examples/20260215_APP/`):
```java
@Test
public void testParseCardError() {
    String line = "ERROR [2026-02-15 00:04:36-165] [CardReadState.OnAsyncCmdCompMsg] Card Accepting was failed with ERROR";
    List<CreateJournalEventRequest> events = parser.parseLine(line, "DLX001");

    assertEquals(1, events.size());
    assertEquals("APPLICATION_ERROR_CARD", events.get(0).getEventType());
    assertEquals("HIVEOPS_AGENT_APPLOG", events.get(0).getEventSource());
}
```

**File**: `/source/hiveops-src/hiveops-agent/hiveops-journal/src/test/java/com/hiveops/applogs/TransactionCorrelatorTest.java` (new)

Test transaction correlation logic (if implementing the correlator).

## Critical Files to Modify/Create

### New Files (Create):
1. `/source/hiveops-src/hiveops-agent/hiveops-journal/src/main/java/com/hiveops/applogs/AppLogSource.java`
2. `/source/hiveops-src/hiveops-agent/hiveops-journal/src/main/java/com/hiveops/applogs/AppLogParser.java`
3. `/source/hiveops-src/hiveops-agent/hiveops-journal/src/main/java/com/hiveops/applogs/SimpleAppLogParser.java`
4. `/source/hiveops-src/hiveops-agent/hiveops-journal/src/main/java/com/hiveops/applogs/AppLogEventProcessor.java`
5. `/source/hiveops-src/hiveops-agent/hiveops-journal/src/main/java/com/hiveops/applogs/AppLogEventModule.java`
6. `/source/hiveops-src/hiveops-agent/hiveops-journal/src/main/java/com/hiveops/applogs/TransactionCorrelator.java` (optional)
7. `/source/hiveops-src/hiveops-agent/hiveops-journal/src/test/java/com/hiveops/applogs/SimpleAppLogParserTest.java`

### Existing Files to Modify:
1. `/source/hiveops-src/hiveops-agent/hiveops-journal/src/main/java/com/hiveops/events/EventType.java` - Add APPLICATION_ERROR_* enum values
2. `/source/hiveops-src/hiveops-agent/hiveops-journal/src/main/resources/META-INF/services/com.hiveops.core.module.AgentModule` - Register AppLogEventModule
3. `/source/hiveops-src/hiveops-agent/hiveops-app/src/main/resources/hiveops.properties` - Add configuration properties

## Reusable Existing Functions/Utilities

1. **IncidentEventClient** (`/source/hiveops-src/hiveops-agent/hiveops-journal/src/main/java/com/hiveops/events/IncidentEventClient.java`) - HTTP client for sending events to the atm-incident backend (no changes needed)
2. **CreateJournalEventRequest** (`/source/hiveops-src/hiveops-agent/hiveops-journal/src/main/java/com/hiveops/events/dto/CreateJournalEventRequest.java`) - DTO for event payloads (already supports all needed fields)
3. **MonitoredLogFile** (`/source/hiveops-src/hiveops-agent/hiveops-core/src/main/java/com/hiveops/http/MonitoredLogFile.java`) - File monitoring with offset tracking (can be adapted for line-based reading)
4. **UTF16LEReader** (`/source/hiveops-src/hiveops-agent/hiveops-core/src/main/java/com/hiveops/http/UTF16LEReader.java`) - **CRITICAL**: Use this to read device journal files, which are UTF-16LE encoded with BOM. Required for transaction correlation.
5. **JournalSource** (`/source/hiveops-src/hiveops-agent/hiveops-journal/src/main/java/com/hiveops/journals/JournalSource.java`) - Pattern for AppLogSource (note: device journals are `ej_*.txt` files in UTF-16LE, different from server journals)
6. **SimpleJournalEventParser** (`/source/hiveops-src/hiveops-agent/hiveops-journal/src/main/java/com/hiveops/events/SimpleJournalEventParser.java`) - Pattern for SimpleAppLogParser
7. **JournalEventProcessor** (`/source/hiveops-src/hiveops-agent/hiveops-journal/src/main/java/com/hiveops/events/JournalEventProcessor.java`) - Pattern for AppLogEventProcessor

## Verification Plan

### Unit Testing
```bash
# Run tests for the new parser
mvn test -Dtest=SimpleAppLogParserTest

# Run all app-log tests
mvn test -pl hiveops-journal -Dtest="com.hiveops.applogs.*"
```

### Integration Testing
1. Build the fat JAR: `mvn clean package -DskipTests`
2. Create test directory structure:
   ```
   mkdir -p d:\MoniPlus2SLog\20260215
   cp examples/20260215_APP/APLog20260215.log d:\MoniPlus2SLog\20260215\
   ```
3. Configure `hiveops.properties` with:
   - `applog.events.enabled=true`
   - `applog.events.dir=d:\\MoniPlus2SLog`
   - `incident.endpoint=https://incident.bcos.cloud`
4. Run the agent: `java -jar hiveops-app/target/hiveops-*-jar-with-dependencies.jar`
5. Verify in logs:
   - "Started app log event processor thread"
   - "Processing X bytes from d:\MoniPlus2SLog\20260215\APLog20260215.log"
   - "Sending Y events to incident backend"
6. Check the atm-incident backend (incident.bcos.cloud) for received APPLICATION_ERROR_* events
7. Verify transaction correlation (if enabled) shows transaction sequence numbers

### Manual Verification
1. Monitor a real application log file with live ERROR entries
2. Verify events appear in the atm-incident dashboard
3. Check that file position persists across agent restarts
4. Verify no reprocessing of old errors after restart
5. Test log file rotation at midnight (filename changes from APLog20260215.log to APLog20260216.log)

## Notes

- The module is disabled by default if `incident.endpoint` is not configured
- Transaction correlation is optional and can be disabled via `applog.events.correlation.enabled=false`
- Error categorization patterns are configurable via properties for different ATM software versions
- The module shares the same incident backend endpoint as journal-events
- Position tracking ensures no duplicate error reporting across restarts
- File I/O is minimal (only reads new content incrementally)

@ -0,0 +1,95 @@
# Database Seed Script: 1000 ATMs with Realistic Test Data

## Goal
Create a comprehensive SQL seed script to populate the hiveops_incident database with ~1000 ATMs and proportional amounts of incidents, journal events, fleet tasks, and related data across all statuses.

## Output File
- `backend/src/main/resources/db/seed-data.sql` — single SQL script using PL/pgSQL DO blocks

## Approach
Use PostgreSQL PL/pgSQL procedural blocks with `generate_series` and `random()` for efficient bulk data generation. The script will:
1. Clear existing data (TRUNCATE CASCADE)
2. Insert all seed data in a single transaction

## Data Volumes

| Table | Count | Notes |
|-------|-------|-------|
| technicians | 30 | Various specializations, availability |
| atms | 1000 | Varied statuses, models, locations across 20 countries |
| atm_properties | 1000 | One per ATM, varied agent versions/platforms |
| incidents | ~4000 | ~35% OPEN, 20% ASSIGNED, 20% IN_PROGRESS, 15% RESOLVED, 10% CLOSED |
| incident_notes | ~4000 | 1-3 notes per non-OPEN incident |
| journal_events | ~50000 | All 23 event types, spread across last 30 days |
| fleet_artifacts | 12 | Mix of AGENT_JAR and MODULE_JAR |
| fleet_tasks | ~2000 | All 6 statuses (PENDING/QUEUED/RUNNING/COMPLETED/FAILED/CANCELLED) |
| atm_module_status | ~3000 | 2-4 modules per ATM (subset) |
| workflow_transitions | 8 | Standard transitions (same as existing) |
| settings | 1 | Default settings |

## Data Distribution Strategy

### ATMs (1000)
- **Statuses**: OPERATIONAL 70%, MAINTENANCE 15%, DOWN 10%, INACTIVE 5%
- **Models**: NCR SelfServ 84, NCR 6695, Hyosung MoniMax 8600, Hyosung MoniMax 8200, Diebold Nixdorf CS 5550, Diebold Opteva 750, Wincor ProCash 2250xe
- **Locations**: 20 countries (US, UK, DE, FR, NL, BE, AT, CH, ES, IT, PL, CZ, SE, NO, DK, FI, AU, CA, JP, KR) with realistic city/address per country
- **ATM IDs**: `{COUNTRY}-{CITY_CODE}-{SEQ}` pattern (e.g., `US-NYC-001`, `DE-BER-042`)

### Technicians (30)
- Specializations: Card Reader, Network Systems, Cash Handling, Hardware, Software, Security
- Availability: AVAILABLE 60%, BUSY 30%, OFFLINE 10%
- Varied ratings (3.5-5.0), resolution counts, locations

### Incidents (~4000)
- All 11 incident types with realistic weights (CASSETTE_LOW most common, PHYSICAL_DAMAGE least)
- All 4 severities: LOW 20%, MEDIUM 35%, HIGH 30%, CRITICAL 15%
- All 5 statuses with realistic distribution
- ASSIGNED/IN_PROGRESS/RESOLVED/CLOSED linked to technicians
- Timestamps spread across last 90 days
- RESOLVED/CLOSED have realistic resolution times

### Journal Events (~50000)
- All 23 event types with realistic frequency weights
- Event times spread across last 30 days (recent 7 days heavier)
- Event sources: API, REMOTE_MONITORING, MANUAL, SCHEDULED, HIVEOPS_AGENT
- Card reader events include slot/status fields
- Cassette events include type/fill/count/currency fields

### Fleet Tasks (~2000)
- Task kinds: UPDATE_CLIENT 50%, REBOOT 30%, RESTART_CLIENT 20%
- Statuses: PENDING 15%, QUEUED 10%, RUNNING 10%, COMPLETED 45%, FAILED 15%, CANCELLED 5%
- COMPLETED/FAILED have start/complete timestamps
- Some linked to fleet artifacts

### Fleet Artifacts (12)
- AGENT_JAR: hiveops-agent versions (1.0.0 through 1.5.2)
- MODULE_JAR: various modules (journal-parser, config-sync, health-monitor, etc.)
- Realistic file sizes and SHA256 hashes

## Script Structure

```sql
BEGIN;

-- 1. TRUNCATE all tables (CASCADE)
-- 2. Reset sequences

-- 3. INSERT technicians (30 rows, explicit)
-- 4. INSERT ATMs (1000 rows via generate_series DO block)
-- 5. INSERT atm_properties (1000 rows via DO block)
-- 6. INSERT fleet_artifacts (12 rows, explicit)
-- 7. INSERT incidents (~4000 via DO block with random distribution)
-- 8. INSERT incident_notes (via DO block for non-OPEN incidents)
-- 9. INSERT journal_events (~50000 via DO block)
-- 10. INSERT fleet_tasks (~2000 via DO block)
-- 11. INSERT atm_module_status (~3000 via DO block)
-- 12. INSERT workflow_transitions (8 rows, explicit)
-- 13. INSERT settings (1 row, explicit)

COMMIT;
```

## Verification
- Run: `psql -h <host> -U postgres -d hiveops_incident -f seed-data.sql`
- Verify counts: `SELECT 'atms', count(*) FROM atms UNION ALL SELECT 'incidents', count(*) FROM incidents ...`
- Start the backend and check the frontend pages: Dashboard, Incidents, Event Stats, Fleet Stats, Fleet Tasks

@ -0,0 +1,101 @@
# ATM Incident Browser - Implementation Plan

## Overview
Create a restricted Electron-based browser application that only allows access to a configurable URL (the atm-incident site). The app will have a standard window with minimize/close buttons and work on both Windows and Linux.

## Project Structure
```
hiveops-browser/
├── package.json
├── electron-builder.yml        # Build configuration for Windows/Linux
├── src/
│   ├── main/
│   │   ├── main.js             # Electron main process
│   │   ├── config.js           # Configuration management
│   │   └── preload.js          # Security bridge
│   └── renderer/
│       ├── index.html          # Simple loading/error UI
│       └── styles.css          # Basic styles
├── config/
│   └── default-config.json     # Default configuration template
└── assets/
    └── icon.png                # Application icon
```
## Key Components
|
||||
|
||||
### 1. Configuration System (`src/main/config.js`)
|
||||
- Store settings in user's app data directory
|
||||
- Configurable options:
|
||||
- `allowedUrl`: Base URL to allow (e.g., `https://atm-incident.example.com`)
|
||||
- `windowWidth`, `windowHeight`: Window dimensions
|
||||
- `allowSubdomains`: Whether to allow subdomains of the base URL
|
||||
- Settings accessible via system tray or menu
|
||||
|
||||
### 2. Main Process (`src/main/main.js`)
|
||||
- Create BrowserWindow with standard frame
|
||||
- Load the configured URL on startup
|
||||
- Intercept all navigation requests
|
||||
- Block navigation to URLs outside the allowed domain
|
||||
- Handle new window requests (open in same window or block)
|
||||
- Provide settings menu via application menu
|
||||
|
||||
### 3. URL Restriction Logic
|
||||
- Parse the allowed URL to extract the domain
|
||||
- On every navigation event (`will-navigate`, `new-window`):
|
||||
- Compare target URL domain against allowed domain
|
||||
- Block if not matching, show notification
|
||||
- Handle redirects appropriately

### 4. Preload Script (`src/main/preload.js`)
- Minimal context bridge for any needed IPC
- Expose config reading capability to renderer

### 5. Build Configuration (`electron-builder.yml`)
- **Windows**: Create `.exe` installer and portable version
- **Linux**: Create `.AppImage` and `.deb` packages
- Include desktop shortcuts automatically

## Implementation Steps

1. **Initialize project**
   - Create `package.json` with Electron and electron-builder dependencies
   - Set up npm scripts for dev and build

2. **Create main process**
   - Implement `main.js` with BrowserWindow creation
   - Add URL restriction logic with domain validation
   - Create application menu with Settings option

3. **Create configuration system**
   - Implement config loading/saving using `electron-store` or custom JSON
   - Create settings window for URL configuration
   - Store in OS-appropriate location (AppData/config)

4. **Create renderer files**
   - Simple `index.html` for loading states and errors
   - Minimal CSS for professional appearance

5. **Set up build configuration**
   - Configure `electron-builder.yml` for Windows and Linux
   - Set up icons and metadata
   - Configure shortcuts to be created during install

6. **Testing**
   - Test URL blocking on various navigation attempts
   - Test settings persistence
   - Test builds on both platforms

## Verification

1. Run `npm start` to launch in development mode
2. Verify only the configured URL loads
3. Attempt to navigate away (should be blocked)
4. Change settings and verify persistence
5. Build for Windows: `npm run build:win`
6. Build for Linux: `npm run build:linux`
7. Test that installers create proper desktop shortcuts

## Dependencies

- `electron`: ^28.0.0
- `electron-builder`: ^24.0.0 (dev)
- `electron-store`: ^8.0.0 (for config persistence)

@@ -0,0 +1,65 @@
# Plan: Admin API Endpoint — Reset User Password

## Context
hiveops-auth has no password reset functionality. The login page has a dead "Forgot Password" link. The admin role and `/auth/api/admin/**` route prefix already exist. Adding an admin password reset endpoint gives operators a way to reset user passwords without direct DB access. The `PASSWORD_CHANGED` audit event type and `logPasswordChanged()` method are already defined but never called.

## Files to Create / Modify

| Action | File |
|--------|------|
| **CREATE** | `src/main/java/com/hiveops/auth/dto/request/AdminPasswordResetRequest.java` |
| **MODIFY** | `src/main/java/com/hiveops/auth/service/AuthService.java` |
| **CREATE** | `src/main/java/com/hiveops/auth/controller/UserAdminController.java` |

## Implementation Steps

### 1. DTO — `AdminPasswordResetRequest.java`
- Single field: `newPassword` (String)
- Jakarta validation: `@NotBlank`, `@Size(min=12)` — mirrors the existing `RegisterRequest` pattern
- Package: `com.hiveops.auth.dto.request`

### 2. AuthService — `resetUserPassword(String email, String newPassword)`
- Look up the user with `userRepository.findByEmail(email)`; if absent, throw `UsernameNotFoundException` (Spring Security), which `GlobalExceptionHandler` already maps to a 404-style response — check and prefer an appropriate existing exception over a plain `RuntimeException`
- Validate password strength via `PasswordValidator.validatePassword(newPassword)` — throw `IllegalArgumentException` with the error message if invalid
- Encode: `passwordEncoder.encode(newPassword)` → `user.setPasswordHash(...)`
- Side effects for security:
  - `user.resetFailedLoginAttempts()` — clears lock + counter
  - Revoke all existing refresh tokens via `refreshTokenRepository.revokeAllByUser(user)` (already exists — sets `revoked=true`, preserving audit history)
- `userRepository.save(user)`
- `authAuditService.logPasswordChanged(user)` — already implemented, just unused
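
The ordering of these steps matters (validate, encode, clear the lock, revoke, save, audit). A framework-free sketch of that flow — the repository map, encoder function, and audit hooks below are illustrative stand-ins, not the real Spring beans:

```java
import java.util.*;
import java.util.function.*;

class ResetUser {
    String email; String passwordHash; int failedLoginAttempts = 3;
    ResetUser(String email) { this.email = email; }
    void resetFailedLoginAttempts() { failedLoginAttempts = 0; }
}

class AdminPasswordReset {
    // Mirrors the step ordering described in the plan above.
    static void resetUserPassword(Map<String, ResetUser> userRepo,
                                  UnaryOperator<String> encoder,      // stand-in for the BCrypt PasswordEncoder
                                  Consumer<ResetUser> revokeTokens,   // stand-in for refreshTokenRepository
                                  Consumer<ResetUser> auditLog,       // stand-in for authAuditService
                                  String email, String newPassword) {
        ResetUser user = userRepo.get(email);
        if (user == null) throw new NoSuchElementException("User not found: " + email);
        if (newPassword == null || newPassword.length() < 12)
            throw new IllegalArgumentException("Password must be at least 12 characters");
        user.passwordHash = encoder.apply(newPassword); // encode before storing
        user.resetFailedLoginAttempts();                // clears lock + counter
        revokeTokens.accept(user);                      // revoke all refresh tokens
        auditLog.accept(user);                          // PASSWORD_CHANGED audit event
    }
}
```

In the real service the repository save sits between the revoke and audit calls; the stand-in map makes the save implicit.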

### 3. Controller — `UserAdminController.java`
- Pattern mirrors `RateLimitAdminController` exactly:
  - `@RestController`, `@RequestMapping("/auth/api/admin/users")`, `@RequiredArgsConstructor`, `@Slf4j`
  - `@Tag(name = "User Admin", description = "User management (Admin only)")`
  - All endpoints: `@PreAuthorize("hasRole('ADMIN')")`
- Single endpoint for now:

```
POST /auth/api/admin/users/{email}/reset-password
Body: { "newPassword": "..." }
Response 200: { "message": "Password reset successfully", "email": "..." }
Response 404: { "message": "User not found", "email": "..." }
Response 400: { "message": "<validation error>", "email": "..." }
```

- Log at INFO level: `"Admin password reset for user: {}"` (email only, never the password)

## Key Reused Components
- `PasswordEncoder` bean (BCrypt strength 12) — injected into `AuthService`
- `PasswordValidator.validatePassword()` — `util/PasswordValidator.java`
- `authAuditService.logPasswordChanged(user)` — `service/AuthAuditService.java`
- `userRepository.findByEmail()` — `repository/UserRepository.java`
- `refreshTokenRepository.revokeAllByUser(user)` — `repository/RefreshTokenRepository.java`
- `user.resetFailedLoginAttempts()` — `entity/User.java`
- `@PreAuthorize("hasRole('ADMIN')")` pattern — `RateLimitAdminController.java`

## Verification
1. Build: `./mvnw compile` — must be clean
2. Start service (dev profile, H2 in-memory)
3. Register a test user, obtain an admin JWT
4. `POST /auth/api/admin/users/{email}/reset-password` with valid new password → 200
5. `POST /auth/api/login` with old password → 401
6. `POST /auth/api/login` with new password → 200
7. Attempt same endpoint without ADMIN role → 403
8. Attempt with unknown email → 404
9. Attempt with weak password (< 12 chars) → 400
10. Check audit log table for `PASSWORD_CHANGED` event

@@ -0,0 +1,180 @@
# HiveOps Browser: Agent Download & Properties File Features

## Summary
Add two new menu items to the File menu:
1. **Download Agent** - Downloads hiveops-agent binary from API endpoint
2. **Create Agent Properties** - Opens popup to create hiveops.properties and ext/ej.properties files

---

## Files to Modify

| File | Changes |
|------|---------|
| `src/main/main.js` | Add menu items, window handlers, IPC handlers |
| `src/main/preload.js` | Add IPC bridge methods |
| `src/main/api-client.js` | Add `downloadAgent()` method |

## New Files to Create

| File | Purpose |
|------|---------|
| `src/renderer/agent-properties.html` | Properties form dialog UI (tabbed) |
| `src/renderer/agent-properties.js` | Properties form logic |
| `src/renderer/download-progress.html` | Download progress dialog |

---

## Feature 1: Download Agent

### Menu Item
- Location: File menu, after "License Management"
- Label: "Download Agent"

### Behavior
1. Auto-detect platform (Windows/Linux/macOS)
2. Show native save dialog with default filename
3. Download from API: `GET /api/v1/agent/download?platform={platform}`
4. Show progress dialog during download
5. Set executable permissions on Linux/macOS
6. Show success/error message
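
Step 1 and 2 can be sketched with a small helper; the filenames below are assumptions for illustration, not confirmed names of the published agent binaries:

```javascript
// Map the Electron/Node platform id to a default save filename (illustrative).
function defaultAgentFilename(platform = process.platform) {
  switch (platform) {
    case 'win32':  return 'hiveops-agent.exe';
    case 'darwin': return 'hiveops-agent-macos';
    default:       return 'hiveops-agent'; // Linux and others
  }
}

// main.js would pass this to dialog.showSaveDialog({ defaultPath: ... })
// and then request GET /api/v1/agent/download?platform=<platform>.

module.exports = { defaultAgentFilename };
```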

---

## Feature 2: Create Agent Properties

### Menu Item
- Location: File menu, after "Download Agent"
- Label: "Create Agent Properties"

### UI Design: Tabbed Dialog
Two tabs: **Main Config** and **Journal Extension**

---

### Tab 1: Main Config (hiveops.properties)

#### Section: HiveOps Management Server
| Field | Default | Description |
|-------|---------|-------------|
| `mgmt.endpoint` | `https://mgmt.directlx.dev/` | HiveOps management server URL |
| `mgmt.api.key` | (empty) | API key for management server |

#### Section: SmartJournal Server (Journal Upload)
| Field | Default | Description |
|-------|---------|-------------|
| `server.endpoint` | (required) | SmartJournal server URL |
| `server.host` | (empty) | Override HTTP host header |
| `server.to.connect` | `10000` | Connection timeout (ms) |
| `server.to.read` | `10000` | Read timeout (ms) |
| `heartbeat.interval` | `5` | Heartbeat interval (minutes) |

#### Section: ATM Identification
| Field | Default | Description |
|-------|---------|-------------|
| `atm.id` | (required) | ATM identifier (4-40 chars) |
| `atm.transit` | (empty) | ATM transit code |
| `country` | `US` | ISO 3166-1 country code (2 chars) |

#### Section: Windows Registry (collapsible, optional)
| Field | Default | Description |
|-------|---------|-------------|
| `atm.idRegistryKey` | (empty) | Registry key for ATM ID |
| `atm.idRegistryValue` | (empty) | Registry value for ATM ID |
| `atm.transitRegistryKey` | (empty) | Registry key for transit |
| `atm.transitRegistryValue` | (empty) | Registry value for transit |

#### Section: Incident Integration
| Field | Default | Description |
|-------|---------|-------------|
| `incident.endpoint` | `http://localhost:8080` | Incident server URL |
| `incident.events.enabled` | `true` | Enable event reporting |
| `incident.events.recheck.delay.msec` | `5000` | Recheck delay (ms) |
| `incident.events.batch.size` | `100` | Event batch size |
| `incident.to.connect` | `10000` | Incident connect timeout (ms) |
| `incident.to.read` | `30000` | Incident read timeout (ms) |

#### Section: Task Settings
| Field | Default | Description |
|-------|---------|-------------|
| `task.network.cap` | `0.5` | Network utilization cap (0-1) |
| `task.network.retries` | `5` | Upload retry attempts |

#### Section: Directories
| Field | Default | Description |
|-------|---------|-------------|
| `ext.directory` | `ext` | Extensions directory |
| `script.directory` | (empty) | Scripts directory |
| `modules.disabled` | (empty) | Comma-separated disabled modules |

---

### Tab 2: Journal Extension (ext/ej.properties)

#### Section: Journal Location
| Field | Default | Description |
|-------|---------|-------------|
| `journal.dir` | `/opt/hiveops/journal` | Journal directory path |
| `type` | `MAINJOURNAL` | Journal type identifier |

#### Section: Journal Format
| Field | Default | Description |
|-------|---------|-------------|
| `journal.jrntype` | (dropdown) | Journal parser type |
| `filename.format` | `{YYYY}{MM}{DD}.jrn` | Filename pattern |
| `file.format` | `UTF8` | File encoding (UTF8/UTF16LE) |

**journal.jrntype options:**
- PT
- SCOTIABANK_CB
- SCOTIABANK
- HYOSUNG_HIP
- HYOSUNG_FIS

#### Section: Upload Settings
| Field | Default | Description |
|-------|---------|-------------|
| `chunk.max` | `100000` | Max chunk size (bytes) |
| `chunk.delay.msec` | `0` | Delay between chunks (ms) |
| `update.delay.msec` | `300000` | Verification delay (ms) |

---

### Save Behavior
1. User clicks "Save Properties"
2. Show folder picker dialog
3. Create selected folder structure:

```
chosen-folder/
├── hiveops.properties
└── ext/
    └── ej.properties
```

4. Show success message with path

---

## Implementation Order

1. **api-client.js** - Add `downloadAgent()` method
2. **preload.js** - Add IPC bridge methods
3. **main.js** - Add menu items, windows, IPC handlers
4. **download-progress.html** - Progress bar UI
5. **agent-properties.html** - Tabbed form with all fields
6. **agent-properties.js** - Form handling, validation, save logic

---

## Verification

1. Launch app: `npm start`
2. File menu should show "Download Agent" and "Create Agent Properties"
3. Test Download Agent:
   - Verify save dialog appears
   - Verify progress bar during download
   - Verify executable permissions set (Linux/macOS)
4. Test Create Agent Properties:
   - Verify tabbed dialog opens
   - Fill required fields (atm.id, server.endpoint, mgmt.endpoint)
   - Click Save, choose folder
   - Verify `hiveops.properties` and `ext/ej.properties` created with correct content

@@ -0,0 +1,41 @@
# Loading Spinner Overlay

## Overview
Add a reusable loading spinner overlay component with a circular progress indicator, and apply it across all pages that currently show plain "Loading..." text.

---

## 1. Create `LoadingSpinner.svelte`
**New file:** `frontend/src/components/common/LoadingSpinner.svelte`

- Semi-transparent backdrop overlay covering the component area
- Centered card with CSS-animated circular spinner + configurable message
- Props: `message` (string, default "Loading..."), `overlay` (boolean, default true; if false, renders inline without backdrop)
- Dark mode support using existing `:global(.dark-mode)` pattern
- Pure CSS animation (no JS dependencies)
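
A minimal sketch of the component under those constraints; the markup and class names are illustrative, not the final implementation:

```svelte
<!-- LoadingSpinner.svelte (sketch) -->
<script>
  export let message = 'Loading...';
  export let overlay = true;
</script>

<div class="spinner-wrap" class:overlay>
  <div class="spinner-card">
    <div class="spinner" aria-label="loading"></div>
    <span>{message}</span>
  </div>
</div>

<style>
  .overlay { position: absolute; inset: 0; background: rgba(0, 0, 0, 0.25);
             display: flex; align-items: center; justify-content: center; }
  .spinner { width: 28px; height: 28px; border: 3px solid #ccc;
             border-top-color: #3b82f6; border-radius: 50%;
             animation: spin 0.8s linear infinite; }
  @keyframes spin { to { transform: rotate(360deg); } }
  :global(.dark-mode) .spinner-card { background: #1e1e2e; color: #eee; }
</style>
```

A page would then render `<LoadingSpinner message="Loading dashboard..." />` in place of the old text block.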

## 2. Apply to Dashboard
**File:** `frontend/src/components/Dashboard/Dashboard.svelte`
- Replace `<div class="loading">Loading dashboard data...</div>` with `<LoadingSpinner message="Loading dashboard..." />`

## 3. Apply to other pages
Replace all `<div class="loading">...</div>` blocks:

| File | Current text | New message |
|------|-------------|-------------|
| `Dashboard.svelte` | "Loading dashboard data..." | "Loading dashboard..." |
| `AtmList.svelte` | "Loading ATMs..." | "Loading ATMs..." |
| `IncidentList.svelte` | "Loading incidents..." | "Loading incidents..." |
| `FleetTasks.svelte` | "Loading tasks..." | "Loading tasks..." |
| `JournalEvents.svelte` | "Loading events..." | "Loading events..." |
| `AtmHistory.svelte` | "Loading incidents..." | "Loading incidents..." |

## 4. Remove old `.loading` CSS
Remove the plain-text `.loading` class from `Dashboard.css` and any other standalone CSS files where it's replaced.

---

## Verification
- `cd frontend && npm run build` - no errors
- Visual check: spinner appears centered with backdrop on each page during load
- Dark mode: spinner card and backdrop adapt correctly

@@ -0,0 +1,123 @@
# HiveOps Browser — System Info Footer Status Bar

## Context
The HiveOps Browser (Electron 28) currently loads `https://incident.bcos.cloud` directly in the main `BrowserWindow`. The user wants a persistent footer status bar — native to the browser chrome like a menu/status bar — showing: logged-in user, memory usage, CPU usage, and service health. This replaces the fragile approach of having the Svelte frontend call `electronAPI.getUserProfile()`.

## Architecture Change

**Current**: `BrowserWindow` → `loadURL('https://incident.bcos.cloud')` (fills full window)

**New**: `BrowserWindow` (no content) →
- `WebContentsView` (top, full width, `height - 32px`) → `loadURL('https://incident.bcos.cloud')`
- `WebContentsView` (bottom, full width, `32px`) → `loadFile('src/renderer/statusbar.html')`

The `WebContentsView` API is used (it replaces the deprecated `BrowserView`).

---

## Files to Modify / Create

### 1. `src/main/preload.js` — Add 2 new IPC bindings
```javascript
getSystemInfo: () => ipcRenderer.invoke('get-system-info'),
getServiceStatus: () => ipcRenderer.invoke('get-service-status'),
```

### 2. `src/main/main.js` — Major changes
- Import `WebContentsView` from electron
- Create `incidentView` (WebContentsView with preload) and `statusbarView` (WebContentsView with preload)
- Replace `mainWindow.loadURL(allowedUrl)` with:
  - `mainWindow.contentView.addChildView(incidentView)` → `incidentView.webContents.loadURL(allowedUrl)`
  - `mainWindow.contentView.addChildView(statusbarView)` → `statusbarView.webContents.loadFile('statusbar.html')`
- Migrate all `mainWindow.webContents.*` event listeners to `incidentView.webContents.*`:
  - `did-finish-load`, `dom-ready`, `console-message`, `did-fail-load`, `page-title-updated`, `will-navigate`
- Move session header injection to `incidentView.webContents.session.webRequest.onBeforeSendHeaders`
- Add `mainWindow.on('resize', updateBounds)` to keep views sized correctly
- Add `updateBounds()` helper: sets incidentView to `{x:0, y:0, w, h-32}` and statusbarView to `{x:0, y:h-32, w, h:32}`
- Add new IPC handlers:
  - `get-system-info` → uses Node `os` module: `{ memory: {used, total, percent}, cpu: {load}, user: authManager.getUserInfo() }`
  - `get-service-status` → pings health endpoints with `net.request()`, returns array of `{name, status: 'up'|'down'|'checking'}`

### 3. `src/renderer/statusbar.html` — New file
Compact 32px dark status bar with 3 sections:

```
[ 👤 John Doe ADMIN ] ──── [ RAM 52% ████░░ 4.2/8GB ] [ CPU 23% ] ──── [ ● Auth ● Incident ● Config ] ──── [ v2.0.40 ]
```

- Dark background `#0f0f1a` with subtle top border `rgba(255,255,255,0.08)`
- White/muted text, monospace for metrics
- Service dots: green `●` = up, red `●` = down, yellow `●` = checking
- Uses `window.electronAPI.getSystemInfo()` + `getServiceStatus()` on load and every 5s
- Memory: inline bar visualization
- No external dependencies (pure HTML/CSS/JS)

### Service Health Endpoints (defaults, from known architecture)
```javascript
const SERVICE_ENDPOINTS = [
  { name: 'Auth', url: `${authServerUrl}/actuator/health` },
  { name: 'Incident', url: `${allowedUrl}/actuator/health` },
  { name: 'Config', url: `${apiBaseUrl.replace('/api/v1','')}/actuator/health` },
];
```
These use the existing config values. Timeout: 3s. Returns up/down per service.

---

## Key Technical Details

### WebContentsView bounds management
```javascript
function updateBounds() {
  const STATUS_BAR_H = 32;
  const [w, h] = mainWindow.getContentSize();
  incidentView.setBounds({ x: 0, y: 0, width: w, height: h - STATUS_BAR_H });
  statusbarView.setBounds({ x: 0, y: h - STATUS_BAR_H, width: w, height: STATUS_BAR_H });
}
mainWindow.on('resize', updateBounds);
```

### Header injection moves to incidentView's session
```javascript
incidentView.webContents.session.webRequest.onBeforeSendHeaders(...)
```

### System info IPC handler
```javascript
const os = require('os');
ipcMain.handle('get-system-info', () => {
  const total = os.totalmem();
  const free = os.freemem();
  const used = total - free;
  const [load1] = os.loadavg(); // Linux/Mac; 0 on Windows
  return {
    memory: { used, total, percent: Math.round((used / total) * 100) },
    cpu: { load: Math.min(100, Math.round(load1 * 100 / os.cpus().length)) },
    user: authManager.getUserInfo(),
    version: app.getVersion(),
  };
});
```

### Service status uses electron net module
```javascript
const { net } = require('electron');
// HEAD request with 3s timeout per service
```
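
The 3-second timeout can be applied by racing each request against a timer. The helper below is framework-free and illustrative; the commented line shows where an Electron `net.request` wrapper would plug in:

```javascript
// Race a promise against a timeout (illustrative sketch for the 3s health checks).
function withTimeout(promise, ms) {
  return Promise.race([
    promise,
    new Promise((_, reject) => setTimeout(() => reject(new Error('timeout')), ms)),
  ]);
}

// In main.js, per service (sketch; headRequest would wrap net.request):
// const up = await withTimeout(headRequest(service.url), 3000)
//   .then(() => true).catch(() => false);

module.exports = { withTimeout };
```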

---

## Cleanup
- Remove `getUserProfile` call from `hiveops-incident/frontend/src/App.svelte` (the sidebar-user block) since user info is now shown in the browser's own status bar.

---

## Build & Verify
1. `cd /source/hiveops-src/hiveops-browser && npm start` — dev test
2. Confirm status bar appears at bottom of window
3. Confirm incident app fills remaining space and works normally
4. Confirm user name/role shows (must be logged in)
5. Confirm memory/CPU metrics update every 5s
6. Confirm service dots show correct status
7. `npm run build:linux` — build production package
8. Test with `dist/linux-unpacked/hiveops-browser --no-sandbox`

@@ -0,0 +1,95 @@
# Plan: Scaffold Spring Boot Project + UserService

## Context
New empty project directory. User wants a full Spring Boot project scaffolded, then a `UserService` added following the same pattern as `ProductService`. The prompt referenced Spring Boot 4, which is not yet released — Spring Boot **3.4.x** (latest stable, supports Java 21 and virtual threads) will be used instead.

## Project Layout

```
/home/directlx/claude-src/cluade-user/
├── pom.xml
├── CLAUDE.md
└── src/
    ├── main/
    │   ├── java/com/example/app/
    │   │   ├── AppApplication.java
    │   │   ├── exception/
    │   │   │   └── ResourceNotFoundException.java
    │   │   ├── model/
    │   │   │   ├── User.java
    │   │   │   └── Product.java
    │   │   ├── repository/
    │   │   │   ├── UserRepository.java
    │   │   │   └── ProductRepository.java
    │   │   └── service/
    │   │       ├── ProductService.java   ← reference pattern
    │   │       └── UserService.java      ← main deliverable
    │   └── resources/
    │       └── application.properties
    └── test/
        └── java/com/example/app/
            └── service/
                └── UserServiceTest.java
```

## Key Decisions

- **Build**: Maven with `spring-boot-starter-parent` 3.4.2
- **Java**: 21 (records and sealed classes available; JPA entities stay regular classes)
- **DB**: H2 in-memory (dev-ready; swap for PostgreSQL via `application.properties`)
- **Dependencies**: `spring-boot-starter-web`, `spring-boot-starter-data-jpa`, `h2`, `spring-boot-starter-test`
- **Package**: `com.example.app` (conventional; user can rename)

## Files to Create

### `pom.xml`
Spring Boot 3.4.2 parent, Java 21, web + JPA + H2 + test starters.

### `AppApplication.java`
Standard `@SpringBootApplication` entry point.

### `exception/ResourceNotFoundException.java`
`@ResponseStatus(HttpStatus.NOT_FOUND)` extending `RuntimeException` — used by both services for proper error handling.

### `model/Product.java` + `model/User.java`
JPA `@Entity` classes with `@Id @GeneratedValue`. User has: `id`, `name`, `email`, `createdAt`. Product has: `id`, `name`, `price`.

### `repository/ProductRepository.java` + `repository/UserRepository.java`
Extend `JpaRepository<T, Long>`. UserRepository adds `findByEmail(String email)`.

### `service/ProductService.java` (reference pattern)
Constructor-injected `ProductRepository`, `@Service`, methods: `findById`, `findAll`, `create`, `update`, `delete` — throws `ResourceNotFoundException` on missing entity.

### `service/UserService.java` (main deliverable)
Identical pattern to ProductService:
- `@Service` + constructor injection of `UserRepository`
- `findById(Long id)` — throws `ResourceNotFoundException`
- `findAll()` — returns `List<User>`
- `create(User user)` — saves and returns
- `update(Long id, User updated)` — finds, patches fields, saves
- `delete(Long id)` — finds (validates existence), then deletes
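
The service pattern above can be sketched without the framework; the in-memory repository below is an illustrative stand-in for Spring Data's `JpaRepository`, and the entity fields are simplified:

```java
import java.util.*;

class ResourceNotFoundException extends RuntimeException {
    ResourceNotFoundException(String msg) { super(msg); }
}

class User {
    Long id; String name; String email;
    User(Long id, String name, String email) { this.id = id; this.name = name; this.email = email; }
}

// Stand-in for JpaRepository<User, Long> (illustrative, not the real interface).
class InMemoryUserRepository {
    private final Map<Long, User> store = new HashMap<>();
    private long seq = 0;
    User save(User u) { if (u.id == null) u.id = ++seq; store.put(u.id, u); return u; }
    Optional<User> findById(Long id) { return Optional.ofNullable(store.get(id)); }
    List<User> findAll() { return new ArrayList<>(store.values()); }
    void deleteById(Long id) { store.remove(id); }
}

class UserService {
    private final InMemoryUserRepository repository;
    UserService(InMemoryUserRepository repository) { this.repository = repository; }

    User findById(Long id) {
        return repository.findById(id)
            .orElseThrow(() -> new ResourceNotFoundException("User not found: " + id));
    }
    List<User> findAll() { return repository.findAll(); }
    User create(User user) { return repository.save(user); }
    User update(Long id, User updated) {
        User existing = findById(id);  // validates existence
        existing.name = updated.name;
        existing.email = updated.email;
        return repository.save(existing);
    }
    void delete(Long id) {
        findById(id);                  // throws ResourceNotFoundException if absent
        repository.deleteById(id);
    }
}
```

In the real project, `@Service` and constructor injection replace the manual wiring, and the repository is the Spring Data interface.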

### `application.properties`
H2 console enabled, JPA `ddl-auto=create-drop`, show-sql for development.

### `UserServiceTest.java`
Mockito-based unit test (`@ExtendWith(MockitoExtension.class)`) covering findById (found + not-found), findAll, create, update, delete.

### `CLAUDE.md`
Project-specific guidance for future Claude sessions.

## Verification

```bash
# Build
mvn clean compile

# Run tests
mvn test

# Run a single test class
mvn test -Dtest=UserServiceTest

# Start the app
mvn spring-boot:run
```

@@ -0,0 +1,169 @@
# Plan: Reorganize hiveops-openmetal for Multi-Software Management

## Context
The `hiveops-openmetal` directory is currently structured entirely around HiveOps. The goal is to reorganize it so that multiple software deployments (HiveOps, SmartJournal, and potentially more in the future) can be managed cleanly from this directory. SmartJournal will share some infrastructure (database) but have its own dedicated app instance(s).

---

## Proposed Directory Structure

```
hiveops-openmetal/
├── hiveops/                      # HiveOps software configs
│   ├── instances/
│   │   ├── services/             # ← moved from instances/services/
│   │   └── browser/              # ← moved from instances/browser/
│   ├── docker-compose.yml        # ← moved from root
│   ├── docker-compose.db.yml
│   ├── docker-compose.override.yml
│   ├── docker-compose.prod.yml
│   ├── docker-compose.ssl.yml
│   ├── .env                      # ← moved from root
│   ├── .env.example
│   └── scripts/                  # HiveOps-specific scripts (subset of root scripts/)
│
├── smartjournal/                 # SmartJournal software configs
│   ├── instances/
│   │   └── services/             # SmartJournal dedicated app instance
│   ├── docker-compose.yml        # ← bring in from existing SmartJournal config
│   ├── .env
│   ├── .env.example
│   └── scripts/
│
├── shared/                       # Shared infrastructure
│   └── database/                 # ← moved from instances/database/
│       ├── docker-compose.yml
│       ├── .env
│       ├── .env.example
│       ├── postgresql.conf
│       ├── backup/
│       └── scripts/
│
├── docs/                         # Cross-project / OpenMetal-level docs (kept at root)
│   ├── OPENMETAL-SETUP.md
│   ├── MULTI-INSTANCE-ARCHITECTURE.md
│   └── ...existing docs...
│
├── scripts/                      # Cross-project provisioning scripts (kept at root)
│   ├── provision-instances.sh
│   ├── deploy-all-instances.sh
│   ├── build-all-images.sh
│   └── check-prerequisites.sh
│
├── .env.openmetal                # OpenMetal cloud credentials (shared, stays at root)
├── .env.openmetal.example
├── .env.openmetal.ready
├── cloud-init-hiveops.yaml
├── .gitignore
└── README.md
```

---

## What Moves Where

| Current Path | New Path |
|---|---|
| `instances/services/` | `hiveops/instances/services/` |
| `instances/browser/` | `hiveops/instances/browser/` |
| `instances/database/` | `shared/database/` |
| `docker-compose.yml` (root) | `hiveops/docker-compose.yml` |
| `docker-compose.*.yml` (root) | `hiveops/docker-compose.*.yml` |
| `.env` (root) | `hiveops/.env` |
| `.env.example` (root) | `hiveops/.env.example` |
| `nginx/` (legacy, root) | `hiveops/nginx/` (or remove if unused) |
| `certs/` (root) | `hiveops/certs/` |
| `data/` (root) | `hiveops/data/` |

**Stays at root:**
- `docs/` — OpenMetal-level docs remain shared
- `scripts/` — provisioning scripts remain shared
- `.env.openmetal*` — cloud credentials are global
- `cloud-init-hiveops.yaml`
- `README.md`
- `.gitignore`
- `.gitea/`

---

## Steps

### 1. Create new directory skeleton
Note: create only `shared/`, not `shared/database/` — if the target directory already exists, the `mv` in step 3 would nest the moved directory inside it.
```
mkdir -p hiveops/instances
mkdir -p smartjournal/instances/services
mkdir -p shared
```

### 2. Move HiveOps instance directories
```
mv instances/services hiveops/instances/services
mv instances/browser hiveops/instances/browser
```

### 3. Move shared database instance
```
mv instances/database shared/database
```

### 4. Move root-level HiveOps configs
```
mv docker-compose.yml hiveops/
mv docker-compose.db.yml hiveops/
mv docker-compose.override.yml hiveops/
mv docker-compose.prod.yml hiveops/
mv docker-compose.ssl.yml hiveops/
mv .env hiveops/
mv .env.example hiveops/
mv nginx/ hiveops/
mv certs/ hiveops/
mv data/ hiveops/
```

### 5. Set up smartjournal/ skeleton
Create:
- `smartjournal/instances/services/` — placeholder for SmartJournal's dedicated instance config
- `smartjournal/docker-compose.yml` — brought in from existing SmartJournal config
- `smartjournal/.env.example` — template env file
- `smartjournal/README.md` — brief description

### 6. Update paths in deployment scripts
Scripts that reference old paths need updating:
- `scripts/deploy-all-instances.sh` — update instance paths
- `instances/services/scripts/deploy.sh` → now at `hiveops/instances/services/scripts/deploy.sh`
- Any SSH copy commands in docs referencing old paths
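
The path rewrite in step 6 can be done mechanically; this is an illustrative one-liner (assumes GNU `sed -i`, should be run once from the repo root, and the diff reviewed before committing):

```shell
# Rewrite old instance paths inside the shared scripts.
# Inspect matches with the grep alone first, then apply the sed.
grep -rl 'instances/services' scripts/ \
  | xargs -r sed -i 's|instances/services|hiveops/instances/services|g'
```

The same pattern applies for `instances/browser` and `instances/database` → `shared/database`.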

### 7. Update README.md
Update the root README to reflect the new structure and include links to the `hiveops/` and `smartjournal/` subdirectories.

### 8. Handle untracked file
- `preload.js` (currently untracked at root) — determine if it belongs to the HiveOps browser and move it accordingly to `hiveops/`

---

## Files with Hardcoded Paths to Update

After moving, check and update these:
- `scripts/deploy-all-instances.sh` — references `instances/` paths
- `scripts/provision-instances.sh` — may reference `instances/`
- `.gitea/workflows/deploy.yml` — check deployment paths
- Any `scp` commands in docs pointing to `instances/services/nginx/conf.d/`

---

## Verification

1. Check `hiveops/instances/services/docker-compose.yml` still works:
```bash
cd hiveops/instances/services && docker compose config
```
2. Check `shared/database/docker-compose.yml` still works:
```bash
cd shared/database && docker compose config
```
3. Verify git status shows moves (not deletions):
```bash
git status
git diff --stat
```
4. On production: deployment scripts reference new paths correctly (dry-run before deploying)

# Project Rename Plan: atm-incident → hiveops-incident

## Overview
Rename the entire project from "atm-incident" to "hiveops-incident" across 121 files, covering the Java package structure, database name, Docker infrastructure, and documentation.

## Scope
- **Java Package**: `com.atm.incident` → `com.hiveops.incident` (93 files)
- **Database**: `atm_incident` → `hiveops_incident` (15 files)
- **Docker/Containers**: All containers, images, and networks (25 files)
- **Documentation**: All paths and references (16 files)

## Implementation Steps

### 1. Pre-Flight Validation
- Verify clean git state
- Create safety branch: `rename-to-hiveops-incident`
- Stop all running Docker services

### 2. Backend Maven Configuration
**File**: `backend/pom.xml`
- Update `<groupId>com.atm</groupId>` → `<groupId>com.hiveops</groupId>`
- Validate with `mvn validate`

### 3. Java Package Rename (Critical - Three-Step Process)

**Step 3a: Update Package Content First**
```bash
cd backend/src/main/java/com/atm/incident
# Update package declarations in all 93 files
find . -name "*.java" -exec sed -i 's/^package com\.atm\.incident/package com.hiveops.incident/g' {} \;
# Update all imports
find . -name "*.java" -exec sed -i 's/import com\.atm\.incident/import com.hiveops.incident/g' {} \;
```

**Step 3b: Rename Main Application Class**
- Update the class name in content: `AtmIncidentManagementApplication` → `HiveopsIncidentManagementApplication`
- Rename the file using `git mv`
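Step 3b can be rehearsed end to end in a scratch repo. A sketch (the scratch repo and file contents are illustrative; `sed -i`, as elsewhere in this plan, assumes GNU sed):

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q . && git config user.email demo@example.com && git config user.name demo
mkdir -p com/atm/incident
printf 'package com.atm.incident;\npublic class AtmIncidentManagementApplication {}\n' \
  > com/atm/incident/AtmIncidentManagementApplication.java
git add -A && git commit -qm "before rename"

# 1. Rewrite the class name inside the file first
sed -i 's/AtmIncidentManagementApplication/HiveopsIncidentManagementApplication/g' \
  com/atm/incident/AtmIncidentManagementApplication.java
# 2. Then rename the file with git mv so the rename is recorded
git mv com/atm/incident/AtmIncidentManagementApplication.java \
       com/atm/incident/HiveopsIncidentManagementApplication.java
git commit -qm "rename main application class"
```

Doing the content edit before the `git mv` keeps the two changes in one commit without confusing git's rename detection.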

**Step 3c: Move Directory Structure**
```bash
cd backend/src/main/java/com
mkdir -p hiveops/incident
git mv atm/incident/* hiveops/incident/
rmdir -p atm/incident   # the old dirs are empty now; git tracks files, not directories
```

**Validation**: `mvn clean compile` must succeed

### 4. Backend Configuration Files
**File**: `backend/src/main/resources/application.properties`
- `spring.application.name`: `atm-incident-management` → `hiveops-incident-management`
- `logging.level`: `com.atm.incident` → `com.hiveops.incident`
- Database URL: `atm_incident` → `hiveops_incident`

### 5. Database Name Changes
**Files**: All config files, scripts, and documentation
```bash
# Docker compose files
sed -i 's/atm_incident/hiveops_incident/g' docker-compose.yml docker-compose.prod.yml .env.example

# DevOps scripts (5 files)
find devops-scripts -name "*.sh" -exec sed -i 's/atm_incident/hiveops_incident/g' {} \;

# Documentation (16 files)
find docs -name "*.md" -exec sed -i 's/atm_incident/hiveops_incident/g' {} \;
sed -i 's/atm_incident/hiveops_incident/g' README.md CLAUDE.md

# SQL files
sed -i 's/atm_incident/hiveops_incident/g' backend/src/main/resources/db/*.sql SAMPLE_DATA.sql
```

### 6. Docker Infrastructure
**Pattern**: `atm-incident-*` → `hiveops-incident-*`

**Files**: `docker-compose.yml`, `docker-compose.prod.yml`, `.gitea/workflows/build-deploy.yml`, DevOps scripts
```bash
# Docker compose
sed -i 's/atm-incident-/hiveops-incident-/g' docker-compose.yml docker-compose.prod.yml

# CI/CD pipeline
sed -i 's/atm-incident-/hiveops-incident-/g' .gitea/workflows/build-deploy.yml

# DevOps scripts (11 files)
find devops-scripts -name "*.sh" -exec sed -i 's/atm-incident-/hiveops-incident-/g' {} \;
find devops-scripts -name "*.sh" -exec sed -i 's/ATM Incident Management System/HiveOps Incident Management System/g' {} \;
```

**Validation**: `docker compose config` must validate

### 7. Frontend Configuration
**Files**: `frontend/package.json`, `frontend/package-lock.json`
```bash
sed -i 's/"name": "atm-incident-frontend"/"name": "hiveops-incident-frontend"/g' \
  frontend/package.json frontend/package-lock.json
```

**Validation**: `npm install --dry-run` must succeed

### 8. Documentation Updates
**Files**: All 16 markdown files in `docs/`, plus root documentation
```bash
# Update deployment paths
find docs -name "*.md" -exec sed -i 's|/opt/atm-incident|/opt/hiveops-incident|g' {} \;

# Update project name
find docs -name "*.md" -exec sed -i 's/atm-incident/hiveops-incident/g' {} \;
find docs -name "*.md" -exec sed -i 's/ATM Incident Management/HiveOps Incident Management/g' {} \;

# Update Java package references
find docs -name "*.md" -exec sed -i 's/com\.atm\.incident/com.hiveops.incident/g' {} \;

# Root docs
sed -i 's/atm-incident/hiveops-incident/g' README.md
sed -i 's/ATM Incident Management/HiveOps Incident Management/g' README.md
sed -i 's/com\.atm\.incident/com.hiveops.incident/g' CLAUDE.md
```

### 9. Comprehensive Validation

**File Count Checks**:
```bash
# Java files should be 93
find backend/src/main/java/com/hiveops/incident -name "*.java" | wc -l

# Old path should not exist
test ! -d backend/src/main/java/com/atm && echo "OK"
```

**No Old References** (each should return 0):
```bash
grep -r "package com\.atm\.incident" backend/src --include="*.java" | wc -l
grep -r "import com\.atm\.incident" backend/src --include="*.java" | wc -l
grep -r "atm-incident-" . --exclude-dir=.git --exclude-dir=node_modules | wc -l
```

**Build Tests** (subshells, so each build runs from the repo root):
```bash
# Backend
(cd backend && mvn clean compile)

# Frontend
(cd frontend && npm install && npm run check)

# Docker compose syntax
docker compose -f docker-compose.yml config > /dev/null
docker compose -f docker-compose.prod.yml config > /dev/null
```

### 10. Git Commits (6 Logical Commits)

1. **Java package structure** - Package rename, directory move, main class
2. **Configuration files** - Spring config, logging, database name
3. **Docker infrastructure** - Containers, networks, CI/CD
4. **DevOps scripts** - All shell scripts
5. **Documentation** - All markdown files
6. **Database schemas** - SQL files and sample data

Each commit includes migration context and a co-authorship tag.

### 11. Integration Testing

**Local Development Test**:
```bash
docker compose down -v
docker compose build
docker compose up -d
sleep 30
docker compose ps   # all services should be "Up"
curl http://localhost:8080/api/atms
curl http://localhost:5173
docker exec hiveops-incident-db psql -U postgres -c "\l" | grep hiveops_incident
```

**API Compatibility Test**:
```bash
# Test hiveops-agent integration
curl -X POST http://localhost:8080/api/journal-events \
  -H "Content-Type: application/json" \
  -d '{"atmId": 1, "eventType": "CARD_READER_DETECTED", "eventDetails": "Test", "eventSource": "HIVEOPS_AGENT"}'
```

## Critical Files

1. `backend/src/main/java/com/atm/incident/AtmIncidentManagementApplication.java` - Main class requiring rename
2. `backend/pom.xml` - Maven config (update before package changes)
3. `backend/src/main/resources/application.properties` - Spring configuration
4. `docker-compose.yml` - Development orchestration
5. `.gitea/workflows/build-deploy.yml` - CI/CD pipeline

## Database Migration Notes

**For Existing Deployments**: a database rename is required

**Option 1: Rename Existing Database** (preserves data):
```sql
SELECT pg_terminate_backend(pid) FROM pg_stat_activity
WHERE datname = 'atm_incident' AND pid <> pg_backend_pid();
ALTER DATABASE atm_incident RENAME TO hiveops_incident;
```

**Option 2: Fresh Database** (clean start):
```bash
./devops-scripts/setup-database.sh  # already updated with the new name
```

## Risk Mitigation

**Java Package Rename Risk**: Update content before moving files; use `git mv` to preserve history
**Database Risk**: Rename instead of recreate; back up first
**Rollback**: Safety branch preserved; revert commits or reset to the previous state

## Success Criteria

- [ ] All 93 Java files use the `com.hiveops.incident` package
- [ ] Zero references to `com.atm.incident` in the codebase
- [ ] All containers use the `hiveops-incident-*` prefix
- [ ] Database name is `hiveops_incident` everywhere
- [ ] Backend compiles successfully (`mvn clean compile`)
- [ ] Frontend builds without errors (`npm run check`)
- [ ] Docker Compose starts all services
- [ ] API endpoints respond correctly
- [ ] hiveops-agent integration works

## Verification Commands

After implementation, run:
```bash
# Check for any remaining old references
grep -r "com\.atm\.incident\|atm-incident\|atm_incident" . \
  --exclude-dir={.git,node_modules,target} \
  --include=\*.{java,properties,yml,yaml,json,sh,md}

# Should return empty, or match only docs/CLAUDE.md as historical reference
```

# Plan: hiveops-qds-monitor Microservice

## Context

QDS (Quality Data Systems) sends automated ATM alert emails from `atmalerts@qualitydatasystems.com` via MoniManager to `hiveops@branchcos.com`. These currently land under a Gmail label called **QDS** but are never acted on programmatically. The goal is a new HiveOps microservice that monitors this inbox and automatically creates (or resolves) incidents in `hiveops-incident`.

---

## Email Format (Observed)

Two event types, identified by subject prefix:
- **`[External] Event Occurred`** → create incident
- **`[External] Canceled`** → resolve existing incident

Body contains a structured key-value table (label and value separated by a run of spaces):
```
Terminal #        763330
Fault Contents    Cash Dispenser Bill Cassette PCU3 Abnormal
Fault Device      Cash Dispenser
Error Code        (optional)
Date and Time     02/23/2026 06:56:58
Ticket No         287886
Remaining Total   USD90,000.00 (optional)
Priority          High
Severity          Critical
Problem Desc      ...
```

---


## New Service: `hiveops-qds-monitor`

**Location:** `/source/hiveops-src/hiveops-qds-monitor/` (new repo, Java Spring Boot)

**Tech:** Spring Boot 3.4.x · Gmail API (OAuth2) · RestTemplate · H2 embedded DB · Spring Scheduler

---

## Architecture

```
Gmail (QDS label)
        │  poll every 2 min (unread only)
        ▼
GmailPollerService
        │  parse body
        ▼
EmailParserService ──→ QdsEmailEvent { terminalId, ticketNo, faultDevice,
        │                              faultContents, severity, eventType }
        │
  Event Occurred?                  Canceled?
        │                              │
        ▼                              ▼
IncidentCreationFlow             IncidentResolutionFlow
 1. AuthService.getJwt()          1. TicketMappingRepo.findByTicketNo()
 2. AtmLookupService              2. PUT /api/incidents/{id} → RESOLVED
    GET /api/atms/search          3. TicketMappingRepo.delete()
 3. QdsIncidentMapper
    (fault → type, severity)
 4. POST /api/incidents
 5. TicketMappingRepo.save(ticketNo → incidentId)
        │                              │
        └───── mark email as read ─────┘
```

---

## Project Structure

```
hiveops-qds-monitor/
├── pom.xml
├── Dockerfile
└── src/main/
    ├── java/com/hiveops/qdsmonitor/
    │   ├── HiveOpsQdsMonitorApplication.java
    │   ├── config/
    │   │   └── AppConfig.java               # RestTemplate, Gmail client beans
    │   ├── scheduler/
    │   │   └── GmailPollerService.java      # @Scheduled, calls Gmail API
    │   ├── service/
    │   │   ├── EmailParserService.java      # Parse raw email body → QdsEmailEvent
    │   │   ├── AtmLookupService.java        # GET /api/atms/search
    │   │   ├── AuthService.java             # Login + token cache + auto-refresh
    │   │   └── IncidentApiService.java      # POST /api/incidents, PUT .../status
    │   ├── mapper/
    │   │   └── QdsIncidentMapper.java       # Fault → IncidentType + Severity
    │   ├── dto/
    │   │   ├── QdsEmailEvent.java
    │   │   ├── CreateIncidentRequest.java
    │   │   └── UpdateIncidentRequest.java
    │   └── repository/
    │       └── TicketMappingRepository.java # H2: ticketNo ↔ hiveops incidentId
    └── resources/
        └── application.properties
```

---

## Key Implementation Details

### 1. Gmail Polling (`GmailPollerService`)
- Use the `google-api-services-gmail` Java client library
- OAuth2 credentials (client_id, client_secret, refresh_token) from env vars
- Query: `label:QDS is:unread` — fetch only unread messages
- After processing: mark the email as read (remove the `UNREAD` label)
- Run on `@Scheduled(fixedDelayString = "${qds.poll.interval-ms:120000}")`

### 2. Email Parsing (`EmailParserService`)
- Detect event type from subject: contains `"Event Occurred"` vs `"Canceled"`
- Parse body as key-value pairs using the regex `^(\w[\w\s]+?)\s{2,}(.+)$`
- Extract: Terminal #, Fault Contents, Fault Device, Error Code, Date and Time, Ticket No, Priority, Severity

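The parsing convention (label, then a run of two or more spaces, then the value) can be exercised with standard tools. A sketch on a sample body, using values from the observed format:

```shell
body='Terminal #  763330
Fault Device  Cash Dispenser
Ticket No  287886
Severity  Critical'

# Like the lazy regex ^(\w[\w\s]+?)\s{2,}(.+)$ : split on the FIRST run of 2+ spaces
parsed=$(printf '%s\n' "$body" | awk '{
  i = index($0, "  ")                  # first double space = key/value boundary
  key = substr($0, 1, i - 1)
  val = substr($0, i); sub(/^ +/, "", val)
  printf "%s=%s\n", key, val
}')
printf '%s\n' "$parsed"
```

Splitting on the first run of spaces matters: values like "Cash Dispenser" contain single spaces that must stay intact.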
### 3. Fault → IncidentType Mapping (`QdsIncidentMapper`)

| Fault Device / Contents | IncidentType |
|---|---|
| Card Reader | CARD_READER_FAIL |
| Cash Dispenser + "cassette" | CASSETTE_LOW / CASSETTE_EMPTY |
| Cash Dispenser (other) | DISPENSER_JAM |
| Network | NETWORK_ERROR |
| Power | POWER_FAILURE |
| Item Processing Module | HARDWARE_ERROR |
| Out of service / Service | SOFTWARE_ERROR |
| Default | HARDWARE_ERROR |

QDS Severity → HiveOps Severity:
- `Critical` → `CRITICAL`
- `High` (priority) → `HIGH`
- `Normal` → `MEDIUM`
- `Low` → `LOW`
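The severity mapping above, expressed as a quick lookup (the `MEDIUM` fallback for unrecognized values is an assumption, not stated in the plan):

```shell
map_severity() {
  case "$1" in
    Critical) echo CRITICAL ;;
    High)     echo HIGH ;;      # QDS "High" priority
    Normal)   echo MEDIUM ;;
    Low)      echo LOW ;;
    *)        echo MEDIUM ;;    # assumed fallback for unknown input
  esac
}
map_severity Critical
```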

### 4. Auth Service (`AuthService`)
- `POST http://hiveops-auth:8082/auth/api/login` with service account credentials
- Cache the JWT token + expiry in memory
- Auto-refresh using the refresh token before expiry (checked on each API call)
- Env vars: `SERVICE_ACCOUNT_EMAIL`, `SERVICE_ACCOUNT_PASSWORD`

### 5. ATM Lookup (`AtmLookupService`)
- `GET http://hiveops-incident:8081/api/atms/search?query={terminalId}`
- Return the first match's numeric `id` (Long)
- If no match is found: log a `WARN` and skip incident creation (don't create orphan incidents)

### 6. Incident API (`IncidentApiService`)
- Create: `POST http://hiveops-incident:8081/api/incidents`
- Description format: `[QDS] Terminal: {terminalId} | Ticket: #{ticketNo} | {faultContents} | {dateTime}`
- Resolve: `PUT http://hiveops-incident:8081/api/incidents/{id}` with `{ "status": "RESOLVED" }`

### 7. Ticket Mapping (H2 Embedded DB)
- Table: `ticket_mapping(qds_ticket_no VARCHAR PK, hiveops_incident_id BIGINT, created_at)`
- Spring Data JPA with H2 — no external DB needed
- Persists across restarts (H2 file mode: `./data/qds-monitor`)

---

## Incident Description Format
```
[QDS] Terminal: 763330 | Ticket: #287886 | Cash Dispenser Bill Cassette PCU3 Abnormal | 02/23/2026 06:56:58
```
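Assembling this description from the parsed fields is a single format string. A sketch using the sample values:

```shell
terminal=763330
ticket=287886
fault='Cash Dispenser Bill Cassette PCU3 Abnormal'
ts='02/23/2026 06:56:58'

# Mirrors the format: [QDS] Terminal: {terminalId} | Ticket: #{ticketNo} | {faultContents} | {dateTime}
desc=$(printf '[QDS] Terminal: %s | Ticket: #%s | %s | %s' "$terminal" "$ticket" "$fault" "$ts")
echo "$desc"
```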

---

## Files to Create

| File | Action |
|---|---|
| `/source/hiveops-src/hiveops-qds-monitor/pom.xml` | New — Spring Boot 3.4, Gmail API, H2 |
| `/source/hiveops-src/hiveops-qds-monitor/Dockerfile` | New — multi-stage Java build |
| `/source/hiveops-src/hiveops-qds-monitor/src/...` | New — all Java source files above |
| `/source/hiveops-src/hiveops-qds-monitor/src/main/resources/application.properties` | New |

## Files to Modify

| File | Change |
|---|---|
| `hiveops/instances/services/docker-compose.yml` | Add `hiveops-qds-monitor` service block |
| `hiveops/instances/services/.env` | Add Gmail OAuth + service account env vars |
| `hiveops/.env` | Add `QDS_MONITOR_VERSION=latest` |

---

## Docker Service Definition (to add to docker-compose.yml)

```yaml
hiveops-qds-monitor:
  image: ${REGISTRY_URL}/hiveops-qds-monitor:${QDS_MONITOR_VERSION:-latest}
  restart: unless-stopped
  environment:
    - GMAIL_CLIENT_ID=${GMAIL_CLIENT_ID}
    - GMAIL_CLIENT_SECRET=${GMAIL_CLIENT_SECRET}
    - GMAIL_REFRESH_TOKEN=${GMAIL_REFRESH_TOKEN}
    - GMAIL_LABEL_NAME=QDS
    - INCIDENT_API_URL=http://hiveops-incident:8081
    - AUTH_API_URL=http://hiveops-auth:8082
    - SERVICE_ACCOUNT_EMAIL=${QDS_SERVICE_ACCOUNT_EMAIL}
    - SERVICE_ACCOUNT_PASSWORD=${QDS_SERVICE_ACCOUNT_PASSWORD}
    - POLL_INTERVAL_MS=120000
  volumes:
    - ./data/qds-monitor:/app/data
  depends_on:
    - hiveops-incident
    - hiveops-auth
  networks:
    - hiveops-network
  logging:
    driver: "json-file"
    options:
      max-size: "10m"
      max-file: "3"
  healthcheck:
    test: ["CMD", "bash", "-c", "exec 3<>/dev/tcp/127.0.0.1/8080 && echo -e 'GET /actuator/health HTTP/1.0\r\n\r\n' >&3 && cat <&3 | grep -q 'UP'"]
    interval: 30s
    timeout: 10s
    retries: 3
```

---

## Required Gmail OAuth Setup (one-time, before deployment)

A one-time OAuth2 flow must be run to generate a refresh token for `hiveops@branchcos.com`:
1. Create a Google Cloud project with the Gmail API enabled
2. Create OAuth2 credentials (Desktop app type)
3. Run the OAuth consent flow to obtain a `refresh_token`
4. Store `client_id`, `client_secret`, and `refresh_token` in `.env`

---

## Verification

1. Build the jar locally: `mvn clean package -DskipTests`
2. Run locally with env vars set; verify it connects to Gmail and logs the emails it finds
3. Deploy via docker-compose and check the logs: `docker compose logs -f hiveops-qds-monitor`
4. Send a test "Event Occurred" email to the QDS label and confirm an incident is created in the hiveops-incident UI
5. Send a "Canceled" test email and confirm the incident status changes to resolved
6. Restart the service and check that the H2 ticket mapping persisted

# Settings Pages: Improve & Visual Overhaul

## Summary
Improve and visually overhaul the 5 existing Settings tabs, add a 6th Logging tab, expose backend fields missing from the frontend, and align styling with the app's design system (typography variables, blue gradient headers, card patterns, dark mode).

## Files to Modify

| File | Action |
|------|--------|
| `frontend/src/components/Settings/settings.ts` | Add 5 missing fields to interface + defaults |
| `frontend/src/components/Settings/Settings.svelte` | Overhaul template + styles (details below) |
| `frontend/src/app.css` | Add shared color CSS variables to `:root` |
| `frontend/src/App.svelte` | Add Logging sidebar tab, add `textarea` dark mode rule |

## Step-by-Step

### 1. `settings.ts` - Add missing backend fields
Add to the `Settings` interface and `defaultSettings`:
- `showEmptyStates: boolean` (default `true`) - Display tab
- `requireAssignmentForProgress: boolean` (default `true`) - Workflow tab
- `slaWarningThresholdMinutes: number` (default `30`) - Workflow tab
- `requirePasswordChangeDays: number` (default `90`) - Security tab
- `ipWhitelist: string[]` (default `[]`) - Security tab

### 2. `app.css` - Add global color variables
Move from the Settings.svelte scoped `:global` into `:root`:
```css
:root {
  --color-primary: #2563eb;
  --color-primary-hover: #1d4ed8;
  --color-danger: #d32f2f;
  --color-success: #2e7d32;
  --color-border: #e0e0e0;
  --color-text-primary: #1a1a1a;
  --color-text-secondary: #666;
  --color-bg-light: #fafbfc;
}
```

### 3. `Settings.svelte` - Template improvements

**Header:** Remove the local `.header` overrides - let the global blue gradient apply. Restyle action buttons as white/transparent variants for visibility on the blue background.

**Each tab section:** Upgrade `.setting-section` from flat dividers to mini-cards (subtle background, rounded corners, border). Add icons to `<h3>` headings.

**New content per tab:**
- **Display:** Add `showEmptyStates` checkbox
- **Workflow:** Add `requireAssignmentForProgress` checkbox, add SLA Configuration section with `slaWarningThresholdMinutes`
- **Security:** Add Password Policy section (`requirePasswordChangeDays`), add IP whitelist textarea (visible when `ipWhitelistEnabled` is true)
- **Logging (new tab):** Activity Logging toggle, Log Retention days, API Tracking toggle (all fields already in the store but unexposed)

### 4. `Settings.svelte` - Style overhaul

**Typography:** Replace all hardcoded px font sizes with CSS variables:
- `28px` h1 -> remove (global `.header h1` handles it)
- `22px` h2 -> `var(--font-size-section-title)`
- `16px` h3 -> `var(--font-size-card-title)`
- `14px` labels/body -> `var(--font-size-body)` / `var(--font-size-label)`
- `13px` help text -> `var(--font-size-caption)`

**Colors:** Replace hardcoded colors with the CSS variables from step 2.

**Setting sections:** Card-like sub-sections:
```css
.setting-section {
  background: var(--color-bg-light);
  border-radius: 8px;
  padding: 20px;
  border: 1px solid var(--color-border);
}
```

**Remove** the `:global { ... }` block defining local CSS vars (moved to app.css).

### 5. `App.svelte` - Sidebar + dark mode

Add to the `settingsTabs` array:
```js
{ id: 'logging', label: 'Logging', icon: '📝' }
```

Add `textarea` to the existing dark mode input rules (around line 773).

Add rendering for the logging tab in the settings view section.

## Verification
```bash
cd frontend && npm run build
```
The build should complete with no new errors. Visually: all settings tabs should render with the blue gradient header, card-style sections, correct typography, and proper dark mode support.

# hiveops-remote Implementation Plan

## Overview

Create **hiveops-remote** at `/source/hiveops-src/hiveops-remote/` with:
- **hiveops-remote-server** - Java/Spring Boot control plane + module JAR hosting
- **hiveops-remote-module** - Java module JAR implementing the AgentModule SPI

The module JAR is downloaded from the server when the remote feature is activated, then loaded dynamically into hiveops-agent.

## Requirements

- **Screen Capture**: Static on-demand screenshots
- **File System**: Browse directories + download files
- **License Tiers**: Pro and Enterprise only
- **Distribution**: Module JAR downloaded from the server on activation
- **Location**: `/source/hiveops-src/hiveops-remote/`

---

## Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│                         hiveops-remote                          │
├─────────────────────────────┬───────────────────────────────────┤
│  hiveops-remote-server      │  hiveops-remote-module            │
│  (Spring Boot)              │  (AgentModule JAR)                │
│                             │                                   │
│  - Control Plane API        │  Downloaded & loaded into         │
│  - Command Queue            │  hiveops-agent at runtime         │
│  - Module JAR Hosting       │                                   │
│  - Binary Result Storage    │  - Screen Capture                 │
│                             │  - File Browser                   │
│  PostgreSQL                 │  - File Download                  │
│                             │  - Result Upload                  │
└─────────────────────────────┴───────────────────────────────────┘

Flow:
1. Agent checks for remote module availability
2. Downloads hiveops-remote-module.jar from server
3. ModuleLoader loads JAR via SPI
4. Module starts polling server for commands
```

---

## Project Structure (Maven Multi-Module)

```
/source/hiveops-src/hiveops-remote/
├── pom.xml                              # Parent POM
├── hiveops-remote-server/               # Spring Boot server
│   ├── pom.xml
│   └── src/main/java/com/hiveops/remote/server/
│       ├── RemoteServerApplication.java
│       ├── config/
│       │   └── SecurityConfig.java
│       ├── controller/
│       │   ├── AgentController.java     # Agent endpoints (poll, upload)
│       │   ├── RemoteController.java    # User endpoints (commands)
│       │   └── ModuleController.java    # Module JAR download
│       ├── service/
│       │   ├── AgentService.java
│       │   ├── CommandService.java
│       │   └── BinaryService.java
│       ├── repository/
│       │   ├── AgentRepository.java
│       │   ├── CommandRepository.java
│       │   └── BinaryChunkRepository.java
│       ├── entity/
│       │   ├── RemoteAgent.java
│       │   ├── RemoteCommand.java
│       │   └── BinaryChunk.java
│       └── dto/
│           ├── AgentRegistrationRequest.java
│           ├── CommandRequest.java
│           └── ...
├── hiveops-remote-module/               # Agent module JAR
│   ├── pom.xml
│   └── src/main/java/com/hiveops/remote/module/
│       ├── RemoteModule.java            # AgentModule SPI implementation
│       ├── RemoteCommandProcessor.java  # Command polling & dispatch
│       ├── RemoteHttpClient.java        # Server communication
│       ├── screen/
│       │   ├── ScreenCaptureService.java
│       │   ├── LinuxScreenCapture.java  # X11/Wayland capture
│       │   └── WindowsScreenCapture.java # Robot/GDI capture
│       ├── filesystem/
│       │   ├── FileSystemService.java
│       │   ├── DirectoryLister.java
│       │   ├── FileDownloader.java
│       │   └── PathValidator.java       # Security
│       └── dto/
│           ├── RemoteCommand.java
│           ├── ScreenshotParams.java
│           └── FileListParams.java
├── hiveops-remote-common/               # Shared DTOs between server & module
│   ├── pom.xml
│   └── src/main/java/com/hiveops/remote/common/
│       ├── dto/
│       │   ├── CommandType.java
│       │   ├── CommandStatus.java
│       │   ├── PollResponse.java
│       │   └── FileEntry.java
│       └── security/
│           └── PathSecurityUtils.java
└── deployments/
    ├── docker/
    │   └── Dockerfile
    └── docker-compose.yml
```

---

## Module Loading Flow

### 1. Check for Remote Module (in hiveops-agent)

The existing hiveops-agent can be enhanced to check for downloadable modules:

```java
// In AgentApplication or a new ModuleDownloader
public void checkRemoteModules() {
    String moduleUrl = config.get("remote.module.url");
    if (moduleUrl != null && remoteEnabled) {
        Path modulePath = downloadModule(moduleUrl, "hiveops-remote-module.jar");
        moduleLoader.loadFromPath(modulePath);
    }
}
```

### 2. Module SPI Registration

**File**: `hiveops-remote-module/src/main/resources/META-INF/services/com.hiveops.core.module.AgentModule`
```
com.hiveops.remote.module.RemoteModule
```

### 3. RemoteModule Implementation

```java
public class RemoteModule implements AgentModule {
    private RemoteCommandProcessor processor;

    @Override
    public String getName() {
        return "hiveops-remote";
    }

    @Override
    public void initialize(ModuleContext context) {
        String serverUrl = context.getConfig("remote.server.url");
        String agentToken = context.getConfig("remote.agent.token");

        RemoteHttpClient client = new RemoteHttpClient(serverUrl, agentToken);
        processor = new RemoteCommandProcessor(client, context);
    }

    @Override
    public void start() {
        processor.startPolling();
    }

    @Override
    public void stop() {
        processor.stopPolling();
    }
}
```

---

## Database Schema (PostgreSQL)

```sql
-- In hiveops-remote-server

CREATE TABLE remote_agents (
    id BIGSERIAL PRIMARY KEY,
    agent_id UUID UNIQUE NOT NULL DEFAULT gen_random_uuid(),
    name VARCHAR(255) NOT NULL,
    hostname VARCHAR(255),
    platform VARCHAR(50) NOT NULL,
    agent_version VARCHAR(50),
    module_version VARCHAR(50),
    status VARCHAR(50) NOT NULL DEFAULT 'OFFLINE',
    last_heartbeat_at TIMESTAMPTZ,
    capabilities JSONB,
    license_key VARCHAR(255),
    machine_id VARCHAR(255),
    created_at TIMESTAMPTZ DEFAULT NOW(),
    updated_at TIMESTAMPTZ DEFAULT NOW(),
    UNIQUE(license_key, machine_id)
);

CREATE TABLE remote_commands (
    id BIGSERIAL PRIMARY KEY,
    command_id UUID UNIQUE NOT NULL DEFAULT gen_random_uuid(),
    agent_id BIGINT NOT NULL REFERENCES remote_agents(id) ON DELETE CASCADE,
    type VARCHAR(50) NOT NULL,
    status VARCHAR(50) NOT NULL DEFAULT 'PENDING',
    parameters JSONB,
    result JSONB,
    error_message TEXT,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    started_at TIMESTAMPTZ,
    completed_at TIMESTAMPTZ
);

CREATE TABLE binary_chunks (
    id BIGSERIAL PRIMARY KEY,
    command_id BIGINT NOT NULL REFERENCES remote_commands(id) ON DELETE CASCADE,
    chunk_index INT NOT NULL,
    total_chunks INT NOT NULL,
    data BYTEA NOT NULL,
    checksum VARCHAR(64),
    UNIQUE(command_id, chunk_index)
);

-- Module JAR versions
CREATE TABLE module_versions (
    id BIGSERIAL PRIMARY KEY,
    version VARCHAR(50) NOT NULL UNIQUE,
    filename VARCHAR(255) NOT NULL,
    checksum VARCHAR(64) NOT NULL,
    size_bytes BIGINT NOT NULL,
    released_at TIMESTAMPTZ DEFAULT NOW(),
    is_latest BOOLEAN DEFAULT FALSE
);
```

---

## API Endpoints

### Module Distribution

```
GET  /api/v1/modules/latest             - Get latest module version info
GET  /api/v1/modules/{version}/download - Download module JAR
```
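Since `module_versions` stores a checksum, the agent-side download can verify the JAR before handing it to the ModuleLoader. A local sketch of the verification step (the file contents and the commented curl URL are illustrative):

```shell
# In production the JAR would come from something like:
#   curl -fsSL -o module.jar http://server/api/v1/modules/1.0.0/download
jar=$(mktemp)
printf 'fake module bytes' > "$jar"

# The server would publish the expected checksum via /api/v1/modules/latest;
# here we compute it locally just to demonstrate the comparison.
expected=$(sha256sum "$jar" | awk '{print $1}')

actual=$(sha256sum "$jar" | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
  echo "checksum OK - safe to hand to ModuleLoader"
else
  echo "checksum mismatch - refuse to load" >&2
fi
```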
|
||||
### Agent-Facing (for hiveops-remote-module)
|
||||
|
||||
```
|
||||
POST /api/v1/agents/register - Register agent with module
|
||||
GET /api/v1/agents/poll - Poll for pending commands (long-poll 15s)
|
||||
POST /api/v1/agents/heartbeat - Heartbeat update
|
||||
POST /api/v1/commands/{id}/status - Update command status
|
||||
POST /api/v1/commands/{id}/upload - Upload binary result (chunked)
|
||||
```
|
||||
|
||||
### User-Facing
|
||||
|
||||
```
|
||||
GET /api/v1/agents - List registered agents
|
||||
GET /api/v1/agents/{id} - Get agent details
|
||||
DELETE /api/v1/agents/{id} - Unregister agent
|
||||
|
||||
POST /api/v1/commands/screenshot - Request screenshot
|
||||
POST /api/v1/commands/file-list - List directory
|
||||
POST /api/v1/commands/file-download - Download file
|
||||
GET /api/v1/commands/{id} - Get command status
|
||||
GET /api/v1/commands/{id}/result - Download binary result
|
||||
```
|
||||
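The agent side of `GET /api/v1/agents/poll` is a long-poll loop: block up to ~15s server-side, execute any command returned, then poll again. A sketch of that loop with the HTTP transport abstracted behind a `Supplier` so the control flow is testable; `PollLoop` and `Command` are illustrative names, not the planned module API:

```java
import java.util.Optional;
import java.util.function.Consumer;
import java.util.function.Supplier;

/** Sketch of the agent-side long-poll loop behind GET /api/v1/agents/poll. */
public class PollLoop {

    public record Command(String id, String type) {}

    private final Supplier<Optional<Command>> poll;  // blocks up to ~15s server-side
    private final Consumer<Command> execute;

    public PollLoop(Supplier<Optional<Command>> poll, Consumer<Command> execute) {
        this.poll = poll;
        this.execute = execute;
    }

    /** Runs a fixed number of poll cycles; a real agent would loop until shutdown. */
    public int run(int cycles) {
        int handled = 0;
        for (int i = 0; i < cycles; i++) {
            Optional<Command> cmd = poll.get();  // empty = long-poll timed out
            if (cmd.isPresent()) {
                execute.accept(cmd.get());
                handled++;
            }
        }
        return handled;
    }
}
```

An empty response simply restarts the poll, so the agent never busy-waits: the 15-second server-side timeout sets the idle request rate.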

---
## Screen Capture Implementation

### Linux (X11)

```java
public class LinuxScreenCapture {
    public byte[] capture(ScreenshotParams params) throws Exception {
        // Option 1: Use Robot (requires X11 display)
        Robot robot = new Robot();
        Rectangle screenRect = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
        BufferedImage image = robot.createScreenCapture(screenRect);

        // Encode as JPEG
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        ImageIO.write(image, "jpg", baos);
        return baos.toByteArray();
    }
}
```

### Windows

```java
public class WindowsScreenCapture {
    public byte[] capture(ScreenshotParams params) throws Exception {
        // Same Robot API works on Windows
        Robot robot = new Robot();
        Rectangle screenRect = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
        BufferedImage image = robot.createScreenCapture(screenRect);

        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        ImageIO.write(image, "jpg", baos);
        return baos.toByteArray();
    }
}
```

**Note**: Java's `Robot` class works cross-platform for basic screen capture. On headless Linux it may need Xvfb or an alternative approach.

---
## File System Service

```java
public class FileSystemService {
    private final PathValidator pathValidator;

    public List<FileEntry> listDirectory(String path) throws IOException {
        pathValidator.validate(path); // Security check

        Path dir = Paths.get(path);
        try (Stream<Path> entries = Files.list(dir)) { // close the directory stream
            return entries
                    .map(p -> {
                        try {
                            return new FileEntry(
                                    p.getFileName().toString(),
                                    Files.isDirectory(p) ? "DIRECTORY" : "FILE",
                                    Files.size(p),
                                    Files.getLastModifiedTime(p).toInstant(),
                                    getPosixPermissions(p));
                        } catch (IOException e) {
                            // Files.size/getLastModifiedTime are checked; rethrow unchecked inside the lambda
                            throw new UncheckedIOException(e);
                        }
                    })
                    .collect(Collectors.toList());
        }
    }

    public InputStream downloadFile(String path) throws IOException {
        pathValidator.validate(path);
        pathValidator.checkMaxSize(path);
        return Files.newInputStream(Paths.get(path));
    }
}
```

---
## Security

1. **Agent Auth**: License key + machine ID → JWT token (validated against hiveops-mgmt or standalone)
2. **Path Security**: Configurable allowed paths, block `..` traversal
3. **File Size Limit**: Configurable max (default 100MB)
4. **Module Signature**: JAR checksum verification on download
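The path-security rule in item 2 amounts to: normalize the requested path (collapsing any `..` segments) and require the result to stay under one of the configured allowed roots. A minimal sketch; `PathPolicy` is an illustrative name — the planned `PathValidator` would presumably implement a check like this:

```java
import java.nio.file.Path;
import java.util.List;

/** Sketch of the allowed-roots + traversal check from Security item 2. */
public class PathPolicy {

    private final List<Path> allowedRoots;

    public PathPolicy(List<Path> allowedRoots) {
        this.allowedRoots = allowedRoots.stream().map(Path::normalize).toList();
    }

    /** True only when the normalized path stays inside an allowed root. */
    public boolean isAllowed(String requested) {
        Path p = Path.of(requested).normalize(); // collapses "a/../b" into "b"
        return allowedRoots.stream().anyMatch(p::startsWith);
    }
}
```

Normalizing before the prefix check is the important part: a raw string prefix test would accept `/data/shared/../etc/passwd`.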
---
## Maven Configuration

**Parent pom.xml**:
```xml
<groupId>com.hiveops</groupId>
<artifactId>hiveops-remote</artifactId>
<version>1.0.0-SNAPSHOT</version>
<packaging>pom</packaging>

<modules>
    <module>hiveops-remote-common</module>
    <module>hiveops-remote-module</module>
    <module>hiveops-remote-server</module>
</modules>

<properties>
    <java.version>21</java.version>
    <spring-boot.version>3.4.1</spring-boot.version>
</properties>
```

**hiveops-remote-module pom.xml**:
```xml
<dependencies>
    <dependency>
        <groupId>com.hiveops</groupId>
        <artifactId>hiveops-core</artifactId>
        <version>3.0.1-SNAPSHOT</version>
        <scope>provided</scope> <!-- Provided by hiveops-agent -->
    </dependency>
    <dependency>
        <groupId>com.hiveops</groupId>
        <artifactId>hiveops-remote-common</artifactId>
        <version>${project.version}</version>
    </dependency>
</dependencies>

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <configuration>
                <!-- Bundle only remote-common, exclude hiveops-core -->
            </configuration>
        </plugin>
    </plugins>
</build>
```

---
## Docker Deployment

**docker-compose.yml**:
```yaml
version: '3.8'
services:
  hiveops-remote-server:
    build: .
    ports:
      - "8090:8090"
    environment:
      - DB_HOST=postgres
      - DB_NAME=hiveops_remote
    volumes:
      - ./modules:/app/modules  # Module JARs served from here
    depends_on:
      - postgres

  postgres:
    image: postgres:16-alpine
    environment:
      - POSTGRES_DB=hiveops_remote
      - POSTGRES_USER=hiveops
      - POSTGRES_PASSWORD=hiveops
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
```

---
## Implementation Order

### Phase 1: Project Setup
1. Create Maven multi-module project at `/source/hiveops-src/hiveops-remote/`
2. Setup parent pom.xml with modules
3. Create hiveops-remote-common with shared DTOs

### Phase 2: Server (hiveops-remote-server)
4. Create Spring Boot application
5. Create entities and repositories
6. Create AgentController (register, poll, upload)
7. Create RemoteController (user commands)
8. Create ModuleController (JAR download)
9. Database migrations

### Phase 3: Module (hiveops-remote-module)
10. Create RemoteModule implementing AgentModule SPI
11. Create RemoteCommandProcessor (polling loop)
12. Implement ScreenCaptureService
13. Implement FileSystemService + PathValidator
14. SPI registration in META-INF/services

### Phase 4: Agent Integration
15. Add module download capability to hiveops-agent
16. Test dynamic module loading
17. Configuration for remote module URL

### Phase 5: Docker & Testing
18. Create Dockerfile
19. Create docker-compose.yml
20. End-to-end testing

---
## Configuration

**hiveops-agent hiveops.properties** (to enable remote module):
```properties
# Remote module configuration
remote.enabled=true
remote.server.url=http://localhost:8090
remote.module.download.url=http://localhost:8090/api/v1/modules/latest/download
remote.license.key=${LICENSE_KEY}
remote.machine.id=${MACHINE_ID}
```

**hiveops-remote-server application.yml**:
```yaml
server:
  port: 8090

spring:
  datasource:
    url: jdbc:postgresql://${DB_HOST:localhost}:5432/${DB_NAME:hiveops_remote}
    username: ${DB_USER:hiveops}
    password: ${DB_PASSWORD:hiveops}

remote:
  poll-timeout-ms: 15000
  command-timeout-ms: 300000
  module-storage-path: /app/modules
```

---
## Verification

1. **Build**: `mvn clean package` in hiveops-remote
2. **Start Server**: `docker-compose up -d`
3. **Test Module Download**:
   ```bash
   curl http://localhost:8090/api/v1/modules/latest
   curl -O http://localhost:8090/api/v1/modules/1.0.0/download
   ```
4. **Integration Test**:
   - Start hiveops-agent with `remote.enabled=true`
   - Verify the module downloads and loads
   - Request a screenshot via the API
   - Verify the result is returned
# Plan: Document Favorites Feature Session

## Context
This session added the Favorites data-access and service layer to the Spring Boot app.
The goal is to capture everything done — files created, design decisions, and API surface —
in a new `docs/favorites-feature.md`, consistent with the style of existing docs files
(`docs/setup-summary.md`, `docs/integration-testing.md`).

---

## File to Create

### `docs/favorites-feature.md`

Content to include:

1. **What was built** — one-sentence summary
2. **Files created** — table of the two new files with their package paths
3. **FavoriteRepository** — method table (name, return type, reason)
   - Note on `@Transactional` on `deleteByUserIdAndItemId`
4. **FavoriteService** — method table (name, behaviour / error thrown)
   - Note on class-level `@Transactional` + `readOnly = true` overrides on reads
5. **Design decisions** — duplicate guard in `addFavorite`, existence check in `removeFavorite`
6. **Verification** — `mvn clean compile`

---

## Critical files (read-only reference)

| File | Role |
|---|---|
| `src/main/java/com/example/app/repository/FavoriteRepository.java` | Created this session |
| `src/main/java/com/example/app/service/FavoriteService.java` | Created this session |
| `docs/setup-summary.md` | Style reference |

---

## Verification

After creation:
```bash
# No compilation impact, but confirm the project still compiles clean
mvn clean compile
```
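The design decisions in item 5 — reject duplicate favorites rather than silently ignoring them, and error on removing a non-favorite — can be sketched as follows. This is a simplified stand-in, not the actual `FavoriteService`, with the repository reduced to an in-memory set of (userId, itemId) pairs:

```java
import java.util.HashSet;
import java.util.Set;

/** Simplified stand-in illustrating the duplicate guard and existence check. */
public class FavoriteGuard {

    public record Key(long userId, long itemId) {}

    private final Set<Key> store = new HashSet<>(); // stand-in for FavoriteRepository

    /** Duplicate guard: adding an existing favorite is rejected, not silently ignored. */
    public void addFavorite(long userId, long itemId) {
        if (!store.add(new Key(userId, itemId))) {
            throw new IllegalStateException("Item already favorited");
        }
    }

    /** Existence check: removing a non-favorite is an error, mirroring removeFavorite. */
    public void removeFavorite(long userId, long itemId) {
        if (!store.remove(new Key(userId, itemId))) {
            throw new IllegalStateException("Favorite not found");
        }
    }
}
```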
# Plan: External URLs Feature

## Context
User wants to manage bookmarked external URLs in HiveOps Browser:
1. A new "External URLs" tab in the Settings window to add/remove named URL entries
2. An "External URLs" menu item in the application menu listing those URLs
3. Clicking a URL from the menu opens it in a new tab **within the main window** (not a new OS window)

## Architecture Decision: BrowserView Tab Bar
The main window uses BrowserViews (no parent HTML). Adding in-window tabs requires:
- A `tabbarView` BrowserView (36px tall) at the top of the main window — visible only when at least one external tab is open
- Each external URL tab gets its own `BrowserView` when opened, tracked in an `externalViews` Map
- `updateBounds()` is updated to account for the tab bar height
- The "home" tab always uses the existing `incidentView`

## Data Structure (stored via electron-store)
```json
"externalUrls": [
  { "id": "uuid-1", "name": "My Dashboard", "url": "https://example.com" }
]
```
IDs are generated with `crypto.randomUUID()`.

---

## Files to Modify
### 1. `src/main/config.js`
- Add `externalUrls: []` to `defaultConfig`
- Add `externalUrls` to `getAll()` return object
- Add to `setAll()`: `if (settings.externalUrls !== undefined) store.set('externalUrls', settings.externalUrls)`
- Add convenience methods:
  - `getExternalUrls()` → `store.get('externalUrls', [])`
  - `setExternalUrls(urls)` → `store.set('externalUrls', urls)`

### 2. `src/main/main.js`

**New variables (top of file):**
```javascript
let tabbarView = null;
const externalViews = new Map(); // tabId → BrowserView
let activeTabId = 'home';
```

**Modify `updateBounds()`:**
```javascript
function updateBounds() {
  if (!mainWindow || mainWindow.isDestroyed()) return;
  if (!incidentView || !statusbarView) return;
  const STATUS_BAR_H = 32;
  const TAB_BAR_H = (tabbarView && externalViews.size > 0) ? 36 : 0;
  const [w, h] = mainWindow.getContentSize();

  if (tabbarView) {
    tabbarView.setBounds({ x: 0, y: 0, width: TAB_BAR_H > 0 ? w : 0, height: TAB_BAR_H });
  }

  const contentH = h - STATUS_BAR_H - TAB_BAR_H;
  // Show only active view; hide others by setting zero bounds
  [['home', incidentView], ...externalViews].forEach(([tid, view]) => {
    if (view && !view.webContents.isDestroyed()) {
      if (tid === activeTabId) {
        view.setBounds({ x: 0, y: TAB_BAR_H, width: w, height: contentH });
      } else {
        view.setBounds({ x: 0, y: TAB_BAR_H, width: 0, height: 0 });
      }
    }
  });

  statusbarView.setBounds({ x: 0, y: h - STATUS_BAR_H, width: w, height: STATUS_BAR_H });
}
```
**Modify `createMainWindow()`** — after creating incidentView and statusbarView, create tabbarView:
```javascript
tabbarView = new BrowserView({
  webPreferences: {
    preload: path.join(__dirname, 'preload.js'),
    nodeIntegration: false,
    contextIsolation: true
  }
});
mainWindow.addBrowserView(tabbarView);
tabbarView.webContents.loadFile(path.join(__dirname, '../renderer/tabbar.html'));
```
Also add `tabbarView` cleanup in the `closed` event.

**Add `openExternalTab(id, name, url)` function:**
```javascript
function openExternalTab(id, name, url) {
  if (externalViews.has(id)) {
    switchToTab(id);
    return;
  }
  const view = new BrowserView({ webPreferences: { nodeIntegration: false, contextIsolation: true } });
  mainWindow.addBrowserView(view);
  externalViews.set(id, view);
  view.webContents.loadURL(url);
  switchToTab(id);
}
```
**Add `switchToTab(tabId)` function:**
```javascript
function switchToTab(tabId) {
  activeTabId = tabId;
  updateBounds();
  sendTabsToTabbar();
}
```

**Add `closeExternalTab(tabId)` function:**
```javascript
function closeExternalTab(tabId) {
  const view = externalViews.get(tabId);
  if (view) {
    mainWindow.removeBrowserView(view);
    view.webContents.destroy();
    externalViews.delete(tabId);
  }
  if (activeTabId === tabId) {
    activeTabId = 'home';
  }
  updateBounds();
  sendTabsToTabbar();
}
```

**Add `sendTabsToTabbar()` function:**
```javascript
function sendTabsToTabbar() {
  if (!tabbarView || tabbarView.webContents.isDestroyed()) return;
  const tabs = [
    { id: 'home', name: 'HiveOps', closeable: false }
  ];
  for (const [id] of externalViews) {
    const urlEntry = config.getExternalUrls().find(u => u.id === id);
    tabs.push({ id, name: urlEntry ? urlEntry.name : id, closeable: true });
  }
  tabbarView.webContents.send('update-tabs', { tabs, activeTabId });
}
```
**Modify `createMenu()`** — add "External URLs" menu after the existing File menu (or as a top-level menu):
```javascript
{
  label: 'External URLs',
  submenu: buildExternalUrlsSubmenu()
}
```

**Add `buildExternalUrlsSubmenu()` function:**
```javascript
function buildExternalUrlsSubmenu() {
  const urls = config.getExternalUrls();
  if (urls.length === 0) {
    return [
      { label: 'No URLs configured', enabled: false },
      { type: 'separator' },
      { label: 'Manage External URLs...', click: () => openSettings() }
    ];
  }
  return [
    ...urls.map(entry => ({
      label: entry.name,
      click: () => openExternalTab(entry.id, entry.name, entry.url)
    })),
    { type: 'separator' },
    { label: 'Manage External URLs...', click: () => openSettings() }
  ];
}
```

**Add IPC handlers:**
```javascript
ipcMain.handle('get-external-urls', () => config.getExternalUrls());

ipcMain.handle('save-external-urls', (event, urls) => {
  config.setExternalUrls(urls);
  createMenu(); // rebuild menu with updated URLs
  return config.getExternalUrls();
});

ipcMain.on('switch-tab', (event, tabId) => switchToTab(tabId));

ipcMain.on('close-tab', (event, tabId) => closeExternalTab(tabId));
```

**Settings window height** — increase from 530 to 600:
```javascript
settingsWindow = new BrowserWindow({ width: 500, height: 600, ... });
```
### 3. `src/main/preload.js`
Add to the `contextBridge.exposeInMainWorld` object:
```javascript
getExternalUrls: () => ipcRenderer.invoke('get-external-urls'),
saveExternalUrls: (urls) => ipcRenderer.invoke('save-external-urls', urls),
switchTab: (tabId) => ipcRenderer.send('switch-tab', tabId),
closeTab: (tabId) => ipcRenderer.send('close-tab', tabId),
onUpdateTabs: (callback) => ipcRenderer.on('update-tabs', (event, data) => callback(data)),
```

### 4. `src/renderer/settings.html`
- Add tab button: `<button class="tab-btn" data-tab="externalurls">External URLs</button>`
- Add tab panel `#tab-externalurls`:
  - Form row with `name` text input + `url` URL input + "Add" button
  - Unordered list `#external-urls-list` where each item shows: name, URL, delete button
- JavaScript:
  - `loadExternalUrls()` — calls `window.electronAPI.getExternalUrls()` and renders the list
  - "Add" button handler: validates inputs, adds entry with `crypto.randomUUID()`, saves
  - Delete button handler: removes entry by id, saves
  - External URLs are saved independently (not via the main form submit); each add/delete auto-saves
### 5. New file: `src/renderer/tabbar.html`
Tab bar UI that receives tab data from main process and sends switch/close events back:
```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8">
  <meta http-equiv="Content-Security-Policy" content="default-src 'self'; style-src 'self' 'unsafe-inline'; script-src 'self' 'unsafe-inline';">
  <link rel="stylesheet" href="tabbar.css">
</head>
<body>
  <div id="tab-container"></div>
  <script>
    window.electronAPI.onUpdateTabs(({ tabs, activeTabId }) => {
      const container = document.getElementById('tab-container');
      container.innerHTML = '';
      tabs.forEach(tab => {
        const el = document.createElement('div');
        el.className = 'tab' + (tab.id === activeTabId ? ' active' : '');
        el.dataset.id = tab.id;
        const label = document.createElement('span');
        label.className = 'tab-label';
        label.textContent = tab.name;
        label.addEventListener('click', () => window.electronAPI.switchTab(tab.id));
        el.appendChild(label);
        if (tab.closeable) {
          const close = document.createElement('button');
          close.className = 'tab-close';
          close.textContent = '×';
          close.addEventListener('click', (e) => { e.stopPropagation(); window.electronAPI.closeTab(tab.id); });
          el.appendChild(close);
        }
        container.appendChild(el);
      });
    });
  </script>
</body>
</html>
```
### 6. New file: `src/renderer/tabbar.css`
```css
* { margin: 0; padding: 0; box-sizing: border-box; }
body { background: #2c3e50; display: flex; align-items: center; height: 36px; overflow: hidden; -webkit-app-region: no-drag; user-select: none; }
#tab-container { display: flex; height: 100%; gap: 2px; padding: 4px 8px; }
.tab { display: flex; align-items: center; gap: 4px; padding: 0 12px; background: #3d5166; color: #adb5bd; border-radius: 4px; cursor: pointer; max-width: 200px; font-size: 13px; transition: background 0.15s, color 0.15s; }
.tab:hover { background: #4e6a82; color: #fff; }
.tab.active { background: #2196f3; color: #fff; }
.tab-label { overflow: hidden; text-overflow: ellipsis; white-space: nowrap; }
.tab-close { background: none; border: none; color: inherit; cursor: pointer; font-size: 16px; line-height: 1; padding: 0 2px; opacity: 0.7; }
.tab-close:hover { opacity: 1; }
```
### 7. `src/renderer/settings.css`
Add styles for the URL list management UI:
```css
.url-list { list-style: none; margin-top: 0.5rem; }
.url-list-item { display: flex; align-items: center; gap: 0.5rem; padding: 0.4rem 0; border-bottom: 1px solid #eee; }
.url-list-item .url-name { font-weight: 500; min-width: 100px; }
.url-list-item .url-href { flex: 1; color: #666; font-size: 0.85rem; overflow: hidden; text-overflow: ellipsis; white-space: nowrap; }
.url-add-row { display: flex; gap: 0.5rem; margin-top: 0.5rem; }
.url-add-row input[name="newUrlName"] { width: 120px; }
.url-add-row input[name="newUrlHref"] { flex: 1; }
.btn-sm { padding: 0.25rem 0.6rem; font-size: 0.85rem; }
```

---
## Verification
1. Run `npm start`
2. Open Settings (Ctrl+,) → verify "External URLs" tab appears
3. Add a URL entry (name + URL) → verify it appears in the list
4. Click Save & Apply (or verify auto-save on add)
5. Open the application menu → verify "External URLs" submenu shows the entry
6. Click the URL from the menu → verify a tab bar appears at top of main window with "HiveOps" and the new tab
7. Click the new tab → verify the external URL loads
8. Click "HiveOps" tab → verify switch back to incidentView
9. Click × on the external tab → verify tab closes, tab bar disappears if no more external tabs
10. Delete URL in settings → verify menu updates after close/reopen
# Admin Password Reset Feature Implementation Plan

## Context

The HiveOps management portal currently lacks password management functionality. Portal users can update their profile (name, email) but cannot change passwords. The management page needs the ability for administrators to reset user passwords.

**User Requirement:** Admin users should be able to manually set new passwords for portal users through the management interface.

**Current State:**
- User entity has `passwordHash` field with BCrypt encoding (strength 12)
- No password change/reset endpoints exist
- Customer management exists at `/portal/customers` but no portal user management
- Audit logging system already in place for admin actions
- Portal uses Thymeleaf + Spring Boot with role-based access control

## Implementation Approach

### Architecture Decision
Create a new **Portal Users Management** section at `/portal/users` (separate from customer management) to:
- Maintain separation between Customers (licensees) and Users (portal accounts)
- Allow future extensions (user creation, role changes, account management)
- Follow existing portal pattern of dedicated sections

### Backend Changes
#### 1. Create PortalUserController.java
**Path:** `/src/main/java/com/hiveops/mgmt/controller/portal/PortalUserController.java`

**Endpoints:**
- `GET /portal/users` - List all users (paginated, searchable)
- `GET /portal/users/{id}` - View user details
- `GET /portal/users/{id}/reset-password` - Show password reset form
- `POST /portal/users/{id}/reset-password` - Process password reset
- `POST /portal/users/{id}/enable` - Enable user account
- `POST /portal/users/{id}/disable` - Disable user account

**Patterns to follow:**
- Use `@PreAuthorize("hasRole('ADMIN')")` for authorization
- Follow PortalCustomerController patterns (flash messages, error handling, audit logging)
- Extract client IP using `getClientIp()` helper method

#### 2. Create ResetPasswordRequest.java DTO
**Path:** `/src/main/java/com/hiveops/mgmt/dto/request/ResetPasswordRequest.java`

**Fields:**
```java
@NotBlank(message = "New password is required")
@Size(min = 8, max = 100, message = "Password must be between 8 and 100 characters")
private String newPassword;

@NotBlank(message = "Password confirmation is required")
private String confirmPassword;
```

Add a custom validator to ensure the passwords match.
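The cross-field rule can be expressed as a small check alongside the `@Size` bound. This sketch uses plain static methods rather than a Jakarta `ConstraintValidator` for brevity; the class name is illustrative:

```java
/** Illustrative cross-field checks mirroring ResetPasswordRequest's constraints. */
public class PasswordRules {

    /** Match rule: both values present and identical. */
    public static boolean matches(String newPassword, String confirmPassword) {
        return newPassword != null
                && !newPassword.isBlank()
                && newPassword.equals(confirmPassword);
    }

    /** Length rule mirroring @Size(min = 8, max = 100). */
    public static boolean lengthOk(String newPassword) {
        return newPassword != null
                && newPassword.length() >= 8
                && newPassword.length() <= 100;
    }
}
```

In the real DTO this would typically live in a class-level constraint so the error attaches to `confirmPassword` for the `.invalid-feedback` display described under Frontend Changes.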
#### 3. Enhance UserService.java
**Path:** `/src/main/java/com/hiveops/mgmt/service/UserService.java`

**New methods:**
- `Page<User> findAll(Pageable pageable)` - Paginated user list
- `Page<User> searchUsers(String query, Pageable pageable)` - Search by email/name
- `User resetPassword(Long userId, String newPassword, User adminUser, String ipAddress)` - Reset with BCrypt encoding and audit logging
- `User enableUser(Long userId, User adminUser, String ipAddress)` - Enable user
- `User disableUser(Long userId, User adminUser, String ipAddress)` - Disable user

**Key logic:**
- Use injected `PasswordEncoder` for BCrypt hashing
- Call `AuditService.log()` for all operations
- Prevent admin from disabling themselves

#### 4. Enhance UserRepository.java
**Path:** `/src/main/java/com/hiveops/mgmt/repository/UserRepository.java`

**New methods:**
```java
@Query("SELECT u FROM User u WHERE " +
       "LOWER(u.email) LIKE LOWER(CONCAT('%', :query, '%')) OR " +
       "LOWER(u.name) LIKE LOWER(CONCAT('%', :query, '%'))")
Page<User> search(@Param("query") String query, Pageable pageable);
```
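The "prevent admin from disabling themselves" rule from the UserService key logic above reduces to an identity comparison before the state change. A minimal sketch with the entity reduced to (id, enabled); names are illustrative, not the real `User` entity or service method:

```java
/** Sketch of the self-disable guard from UserService.disableUser. */
public class DisableGuard {

    public record User(long id, boolean enabled) {}

    /** Returns the disabled user, refusing when the admin targets their own account. */
    public static User disableUser(User target, User admin) {
        if (target.id() == admin.id()) {
            throw new IllegalArgumentException("Admins cannot disable their own account");
        }
        return new User(target.id(), false);
    }
}
```

Comparing database IDs (rather than, say, email strings) keeps the guard correct even if the admin's profile fields change mid-session.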
#### 5. Update AuditLog.java
**Path:** `/src/main/java/com/hiveops/mgmt/entity/AuditLog.java`

**Add enum values:**
- `USER_PASSWORD_RESET`
- `USER_ENABLED`
- `USER_DISABLED`

#### 6. Enhance AuditService.java
**Path:** `/src/main/java/com/hiveops/mgmt/service/AuditService.java`

**New methods:**
- `logPasswordReset(User targetUser, User adminUser, String ipAddress)`
- `logUserEnabled(User targetUser, User adminUser, String ipAddress)`
- `logUserDisabled(User targetUser, User adminUser, String ipAddress)`

Log details should include target user email and admin email.

#### 7. Update SecurityConfig.java
**Path:** `/src/main/java/com/hiveops/mgmt/config/SecurityConfig.java`

Add authorization rule:
```java
.requestMatchers("/portal/users/**").hasRole("ADMIN")
```
### Frontend Changes

#### 8. Create User List Template
**Path:** `/src/main/resources/templates/portal/users/list.html`

**Features:**
- Table: Email, Name, Role, Status (badge), Last Login, Created Date
- Search bar (filter by email/name)
- Role filter dropdown
- Action buttons: View, Reset Password
- Pagination (follow customers/list.html pattern)

#### 9. Create User View Template
**Path:** `/src/main/resources/templates/portal/users/view.html`

**Layout:** Two-column (8-4 grid) following customers/view.html

**Left column:** User details card (email, name, role, status, UUID, timestamps, licenses)

**Right column:** Actions card (Reset Password button, Enable/Disable button)

#### 10. Create Password Reset Form
**Path:** `/src/main/resources/templates/portal/users/reset-password.html`

**Form fields:**
- Display user email (read-only context)
- New Password (password input)
- Confirm Password (password input)
- Password requirements hint box (Bootstrap alert)
- Cancel and Reset Password buttons

**Validation:**
- Server-side validation errors with `.invalid-feedback`
- Match validation for password confirmation

#### 11. Update Sidebar Navigation
**Path:** `/src/main/resources/templates/portal/layout/base.html`

Add menu item (after Customers):
```html
<a th:href="@{/portal/users}"
   th:classappend="${activePage == 'users'} ? 'active' : ''"
   class="list-group-item list-group-item-action bg-dark text-white-50"
   sec:authorize="hasRole('ADMIN')">
    <i class="bi bi-person-badge me-2"></i>Users
</a>
```
## Security & Validation

**Password Requirements:**
- Minimum 8 characters
- Maximum 100 characters
- Passwords must match (confirmation)
- BCrypt encoding with strength 12 (existing)

**Authorization:**
- All endpoints require ADMIN role
- Cannot reset own password through this interface
- Cannot disable own account

**Audit Logging:**
- Every password reset logged with target user email, admin email, and IP address
- Enable/disable actions also logged
## Implementation Sequence

1. Update AuditLog entity (add enum values)
2. Update AuditService (add logging methods)
3. Update UserRepository (add search method)
4. Create ResetPasswordRequest DTO
5. Update UserService (add password reset and user management methods)
6. Create PortalUserController
7. Update SecurityConfig (add authorization rule)
8. Create templates directory: `/templates/portal/users/`
9. Create list.html template
10. Create view.html template
11. Create reset-password.html template
12. Update base.html (add sidebar menu item)
## Verification

**Manual Testing:**
1. Login as admin user (admin@directlx.dev / admin123)
2. Navigate to Users menu item in sidebar
3. Verify user list displays with search functionality
4. Click on a user to view details
5. Click Reset Password and set a new password
6. Verify password validation (min length, match requirement)
7. Submit form and verify success message
8. Verify audit log entry was created
9. Test login with new password
10. Test Enable/Disable functionality
11. Verify admin cannot disable own account

**Database Verification:**
- Check `users` table - password_hash should be updated
- Check `audit_logs` table - USER_PASSWORD_RESET entry should exist with correct details
## Critical Files

**Backend (Create/Modify):**
- `/src/main/java/com/hiveops/mgmt/controller/portal/PortalUserController.java` (NEW)
- `/src/main/java/com/hiveops/mgmt/dto/request/ResetPasswordRequest.java` (NEW)
- `/src/main/java/com/hiveops/mgmt/service/UserService.java` (MODIFY - add methods)
- `/src/main/java/com/hiveops/mgmt/repository/UserRepository.java` (MODIFY - add search)
- `/src/main/java/com/hiveops/mgmt/entity/AuditLog.java` (MODIFY - add enums)
- `/src/main/java/com/hiveops/mgmt/service/AuditService.java` (MODIFY - add methods)
- `/src/main/java/com/hiveops/mgmt/config/SecurityConfig.java` (MODIFY - add rule)

**Frontend (Create):**
- `/src/main/resources/templates/portal/users/list.html` (NEW)
- `/src/main/resources/templates/portal/users/view.html` (NEW)
- `/src/main/resources/templates/portal/users/reset-password.html` (NEW)

**Frontend (Modify):**
- `/src/main/resources/templates/portal/layout/base.html` (MODIFY - add menu item)
## Reusable Components
|
||||
|
||||
**Existing utilities to leverage:**
|
||||
- `PasswordEncoder` bean (SecurityConfig.java) - BCrypt strength 12
|
||||
- `AuditService.log()` - Audit logging pattern
|
||||
- `getClientIp(HttpServletRequest)` - IP extraction helper
|
||||
- Flash messages pattern from PortalCustomerController
|
||||
- Pagination pattern from customers/list.html
|
||||
- Bootstrap 5 components and styling
|
||||
- Thymeleaf validation error display patterns
|
||||
|
|
@ -0,0 +1,56 @@
|
|||
# Add Flyway Database Migration Support

## Context
The project has 4 Flyway-named migration files in `db/migration/` (V1-V4) but **Flyway is not in the dependencies**. Production uses `ddl-auto: validate` with no data seeding, so the V4 legal content was never inserted. Dev works because it uses `create-drop` + `db/h2/data.sql`. The goal: add Flyway so migrations run automatically on production PostgreSQL.

## Approach
- **Production**: Flyway enabled, baseline at V3 (V1-V3 already applied manually), V4+ runs automatically
- **Dev (H2)**: Flyway disabled — migration SQL uses PostgreSQL-specific syntax (`E'...\n...'`, `::jsonb`, `ON CONFLICT`) that H2 can't handle. Keep existing `create-drop` + `data.sql`

## Files to Modify

### 1. `pom.xml` — Add Flyway dependencies
Add after the PostgreSQL dependency (line ~63):
```xml
<dependency>
    <groupId>org.flywaydb</groupId>
    <artifactId>flyway-core</artifactId>
</dependency>
<dependency>
    <groupId>org.flywaydb</groupId>
    <artifactId>flyway-database-postgresql</artifactId>
</dependency>
```

### 2. `src/main/resources/application.yml` — Configure Flyway per profile

**Dev profile** — disable Flyway (keep existing H2 setup):
```yaml
spring:
  flyway:
    enabled: false
```

**Prod profile** — enable Flyway with baseline:
```yaml
spring:
  flyway:
    enabled: true
    baseline-on-migrate: true
    baseline-version: 3
    locations: classpath:db/migration
```
- `baseline-on-migrate: true` — on first run, creates `flyway_schema_history` table and marks V1-V3 as already applied
- `baseline-version: 3` — everything up to V3 is assumed to already exist in the production DB
- V4 (legal content) and any future migrations will execute automatically

## No Other Changes Needed
- Migration files (V1-V4) are already correctly named and contain valid PostgreSQL SQL
- Dev `data.sql` already seeds the same data using H2-compatible syntax
- `ddl-auto: validate` stays in prod (Flyway manages schema, Hibernate validates)

## Verification
1. **Dev**: `mvn spring-boot:run` — H2 starts as before, `data.sql` seeds data, Flyway is off
2. **Prod**: Deploy with PostgreSQL — Flyway creates `flyway_schema_history`, baselines at V3, runs V4 to insert legal content
3. `curl https://mgmt.directlx.dev/api/v1/legal` — returns all 4 legal sections
4. Portal at `/portal/legal` — shows 4 documents in master-detail view

@ -0,0 +1,515 @@
# Implementation Plan: ATM Auto-Registration and Management UI

## Overview

Implement four key features:
1. **Auto-register ATMs** when hiveops-agent communicates for the first time
2. **Add "ATM Management" menu** to the frontend navigation
3. **Create ATM list view** showing all ATMs with agent connection status
4. **Enable manual ATM creation** from the frontend

## Architecture Decisions

### Auto-Registration Strategy
- Modify existing `POST /api/journal-events` endpoint to accept either the database ID (Long) or the agent identifier (String)
- When agent identifier is provided, look up or auto-create ATM
- Auto-create ATM with defaults if not found: location="Unknown", address="Auto-registered", model="Unknown"
- Maintains backward compatibility - existing UI calls using database ID continue to work
- `/api/agent` base path reserved for future agent-related functionality (not used in this implementation)
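A minimal sketch of the first-contact payload an agent would send under this strategy (TypeScript for illustration only; the field names follow the DTO in this plan, while the `buildAgentEvent` helper is hypothetical):

```typescript
// Hypothetical helper assembling the journal-event payload an agent sends
// on first contact. Only agentAtmId identifies the ATM; the backend
// auto-registers it if no row with that identifier exists yet.
interface AgentJournalEvent {
  agentAtmId: string;   // agent identifier, e.g. "ATM-001" (never the DB id)
  eventType: string;
  eventDetails: string;
  eventSource: string;
}

function buildAgentEvent(atmIdentifier: string, type: string, details: string): AgentJournalEvent {
  return {
    agentAtmId: atmIdentifier,
    eventType: type,
    eventDetails: details,
    eventSource: "HIVEOPS_AGENT",
  };
}

const event = buildAgentEvent("ATM-001", "CARD_READER_DETECTED", "Card reader initialized");
// POST this object to /api/journal-events; no prior registration call is needed.
```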
### Connection Status Tracking
- Use last-seen timestamp approach via `AtmProperties.lastHeartbeat` field
- Update timestamp on every agent communication (journal events, config sync)
- Calculate connection status dynamically:
  - Connected: lastHeartbeat within 5 minutes
  - Disconnected: lastHeartbeat older than 5 minutes
  - Never Connected: lastHeartbeat is null
- Add computed `agentConnectionStatus` field to `AtmDTO`

## Backend Changes

### 1. Modified and New DTOs

#### File: `backend/src/main/java/com/hiveops/incident/dto/CreateJournalEventRequest.java` (MODIFY)

Modify existing DTO to support both database ID and agent identifier:
```java
@Data
@NoArgsConstructor
@AllArgsConstructor
@Builder
public class CreateJournalEventRequest {
    private Long atmId;        // Database ID (for UI/existing integrations)
    private String agentAtmId; // Agent identifier e.g., "ATM-001" (for hiveops-agent)
    private Long incidentId;
    private String eventType;
    private String eventDetails;
    private Integer cardReaderSlot;
    private String cardReaderStatus;
    private String cassetteType;
    private Integer cassetteFillLevel;
    private Integer cassetteBillCount;
    private String cassetteCurrency;
    private String eventSource;
}
```

**Validation**: Either `atmId` OR `agentAtmId` must be provided (not both, not neither)
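The exclusive-or rule can be stated compactly; a sketch of the documented rule in TypeScript (illustrative only, the `hasValidAtmReference` helper name is hypothetical and the real check lives in the Java service layer):

```typescript
// Hypothetical predicate mirroring the validation rule: exactly one of
// atmId / agentAtmId must be present on an incoming request.
function hasValidAtmReference(req: { atmId?: number | null; agentAtmId?: string | null }): boolean {
  const hasDbId = req.atmId != null;
  const hasAgentId = req.agentAtmId != null && req.agentAtmId !== "";
  return hasDbId !== hasAgentId; // XOR: one and only one reference
}
```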
#### File: `backend/src/main/java/com/hiveops/incident/dto/CreateAtmRequest.java` (NEW)
```java
@Data
@NoArgsConstructor
@AllArgsConstructor
@Builder
public class CreateAtmRequest {
    @NotBlank
    private String atmId; // Unique identifier
    @NotBlank
    private String location;
    @NotBlank
    private String address;
    @NotBlank
    private String model;
    private Double latitude;
    private Double longitude;
}
```

#### File: `backend/src/main/java/com/hiveops/incident/dto/AtmDTO.java` (MODIFY)
Add two new fields:
```java
private String agentConnectionStatus; // "CONNECTED", "DISCONNECTED", "NEVER_CONNECTED"
private LocalDateTime lastHeartbeat;
```

### 2. Service Layer Changes

#### File: `backend/src/main/java/com/hiveops/incident/service/AtmService.java` (MODIFY)

**Add new methods:**
```java
@Transactional
public Atm findOrCreateAtm(String agentAtmId) {
    return atmRepository.findByAtmId(agentAtmId)
            .orElseGet(() -> autoRegisterAtm(agentAtmId));
}

@Transactional
public void updateLastHeartbeat(Atm atm) {
    AtmProperties props = atmPropertiesRepository.findByAtmId(atm.getId())
            .orElseGet(() -> {
                AtmProperties newProps = new AtmProperties();
                newProps.setAtm(atm);
                return atmPropertiesRepository.save(newProps);
            });
    props.setLastHeartbeat(LocalDateTime.now());
    atmPropertiesRepository.save(props);
}

private Atm autoRegisterAtm(String agentAtmId) {
    Atm atm = Atm.builder()
            .atmId(agentAtmId)
            .location("Unknown")
            .address("Auto-registered - pending configuration")
            .model("Unknown")
            .build();

    logger.info("Auto-registering new ATM: {}", agentAtmId);
    return atmRepository.save(atm);
}

@Transactional
public AtmDTO createAtm(CreateAtmRequest request) {
    if (atmRepository.findByAtmId(request.getAtmId()).isPresent()) {
        throw new RuntimeException("ATM with ID " + request.getAtmId() + " already exists");
    }

    Atm atm = Atm.builder()
            .atmId(request.getAtmId())
            .location(request.getLocation())
            .address(request.getAddress())
            .model(request.getModel())
            .latitude(request.getLatitude())
            .longitude(request.getLongitude())
            .build();

    Atm saved = atmRepository.save(atm);
    return mapToDto(saved);
}

@Transactional
public AtmDTO updateAtm(Long id, CreateAtmRequest request) {
    Atm atm = atmRepository.findById(id)
            .orElseThrow(() -> new RuntimeException("ATM not found"));

    // Cannot change atmId (unique identifier)
    atm.setLocation(request.getLocation());
    atm.setAddress(request.getAddress());
    atm.setModel(request.getModel());
    atm.setLatitude(request.getLatitude());
    atm.setLongitude(request.getLongitude());

    Atm saved = atmRepository.save(atm);
    return mapToDto(saved);
}

@Transactional
public void deleteAtm(Long id) {
    Atm atm = atmRepository.findById(id)
            .orElseThrow(() -> new RuntimeException("ATM not found"));
    atm.setStatus(AtmStatus.INACTIVE); // Soft delete
    atmRepository.save(atm);
}
```

**Modify existing method `mapToDto(Atm atm)` at line 89:**
```java
private AtmDTO mapToDto(Atm atm) {
    // Get lastHeartbeat from properties
    LocalDateTime lastHeartbeat = atmPropertiesRepository.findByAtmId(atm.getId())
            .map(AtmProperties::getLastHeartbeat)
            .orElse(null);

    // Calculate connection status
    String connectionStatus = calculateConnectionStatus(lastHeartbeat);

    return AtmDTO.builder()
            .id(atm.getId())
            .atmId(atm.getAtmId())
            .location(atm.getLocation())
            .address(atm.getAddress())
            .status(atm.getStatus().name())
            .latitude(atm.getLatitude())
            .longitude(atm.getLongitude())
            .model(atm.getModel())
            .lastServiceDate(atm.getLastServiceDate())
            .createdAt(atm.getCreatedAt())
            .updatedAt(atm.getUpdatedAt())
            .agentConnectionStatus(connectionStatus)
            .lastHeartbeat(lastHeartbeat)
            .build();
}

private String calculateConnectionStatus(LocalDateTime lastHeartbeat) {
    if (lastHeartbeat == null) {
        return "NEVER_CONNECTED";
    }

    long minutesAgo = java.time.Duration.between(lastHeartbeat, LocalDateTime.now()).toMinutes();
    return minutesAgo <= 5 ? "CONNECTED" : "DISCONNECTED";
}
```
#### File: `backend/src/main/java/com/hiveops/incident/service/JournalEventService.java` (MODIFY)

**Add dependency injection in constructor:**
```java
private final AtmService atmService;
```

**Modify existing `createEvent` method (replace lines 25-46):**
```java
public JournalEventDTO createEvent(CreateJournalEventRequest request) {
    Atm atm;

    // Support both database ID and agent identifier
    if (request.getAgentAtmId() != null && !request.getAgentAtmId().isEmpty()) {
        // Agent identifier provided - find or auto-register
        atm = atmService.findOrCreateAtm(request.getAgentAtmId());
        atmService.updateLastHeartbeat(atm);
    } else if (request.getAtmId() != null) {
        // Database ID provided (existing behavior)
        atm = atmRepository.findById(request.getAtmId())
                .orElseThrow(() -> new RuntimeException("ATM not found"));
    } else {
        throw new RuntimeException("Either atmId or agentAtmId must be provided");
    }

    JournalEvent event = JournalEvent.builder()
            .atm(atm)
            .incident(request.getIncidentId() != null ?
                    incidentRepository.findById(request.getIncidentId()).orElse(null) : null)
            .eventType(EventType.valueOf(request.getEventType()))
            .eventDetails(request.getEventDetails())
            .cardReaderSlot(request.getCardReaderSlot())
            .cardReaderStatus(request.getCardReaderStatus())
            .cassetteType(request.getCassetteType())
            .cassetteFillLevel(request.getCassetteFillLevel())
            .cassetteBillCount(request.getCassetteBillCount())
            .cassetteCurrency(request.getCassetteCurrency())
            .eventSource(request.getEventSource() != null ? request.getEventSource() : "MANUAL")
            .build();

    JournalEvent saved = journalEventRepository.save(event);
    return mapToDto(saved);
}
```

### 3. Controller Changes

#### File: `backend/src/main/java/com/hiveops/incident/controller/JournalEventController.java` (NO CHANGES)

The existing `POST /api/journal-events` endpoint will automatically support both formats through the modified `CreateJournalEventRequest` DTO and updated service logic. No controller changes needed.

**Endpoint behavior:**
- UI sends: `{ "atmId": 123, ... }` - works as before
- Agent sends: `{ "agentAtmId": "ATM-001", ... }` - auto-registers if needed

#### File: `backend/src/main/java/com/hiveops/incident/controller/AtmController.java` (MODIFY)

Add CRUD endpoints:
```java
@PostMapping
public ResponseEntity<AtmDTO> createAtm(@Valid @RequestBody CreateAtmRequest request) {
    AtmDTO atm = atmService.createAtm(request);
    return ResponseEntity.ok(atm);
}

@PutMapping("/{id}")
public ResponseEntity<AtmDTO> updateAtm(@PathVariable Long id, @Valid @RequestBody CreateAtmRequest request) {
    AtmDTO atm = atmService.updateAtm(id, request);
    return ResponseEntity.ok(atm);
}

@DeleteMapping("/{id}")
public ResponseEntity<Void> deleteAtm(@PathVariable Long id) {
    atmService.deleteAtm(id);
    return ResponseEntity.noContent().build();
}
```

## Frontend Changes

### 1. API Client Updates

#### File: `frontend/src/lib/api.ts` (MODIFY)

Add to `atmAPI` object:
```typescript
create: (data: Partial<Atm>) => api.post<Atm>('/atms', data),
update: (id: number, data: Partial<Atm>) => api.put<Atm>(`/atms/${id}`, data),
delete: (id: number) => api.delete(`/atms/${id}`),
```
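The `Atm` type these signatures reference will need the two new backend fields. A sketch of the shape after the DTO changes land (the exact set of existing fields in the real `api.ts` may differ; `neverConnected` is a hypothetical helper):

```typescript
// Sketch of the Atm shape after the AtmDTO changes; the two trailing
// fields are the additions from this plan.
type AgentConnectionStatus = 'CONNECTED' | 'DISCONNECTED' | 'NEVER_CONNECTED';

interface Atm {
  id: number;
  atmId: string;
  location: string;
  address: string;
  model: string;
  status: string;
  latitude?: number;
  longitude?: number;
  agentConnectionStatus: AgentConnectionStatus; // computed server-side
  lastHeartbeat: string | null;                 // ISO timestamp, or null
}

// A manually created ATM that no agent has contacted yet has no heartbeat.
function neverConnected(atm: Atm): boolean {
  return atm.agentConnectionStatus === 'NEVER_CONNECTED' && atm.lastHeartbeat === null;
}
```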
### 2. New Components

#### File: `frontend/src/components/AtmManagement/index.ts` (NEW)
```typescript
export { default } from './AtmManagement.svelte';
```

#### File: `frontend/src/components/AtmManagement/AtmManagement.svelte` (NEW)

Container component with tabs for List and Create views:
```svelte
<script lang="ts">
  import AtmList from './AtmList.svelte';
  import CreateAtm from './CreateAtm.svelte';

  export let activeTab: 'list' | 'create' = 'list';
</script>

<div class="atm-management">
  {#if activeTab === 'list'}
    <AtmList />
  {:else if activeTab === 'create'}
    <CreateAtm />
  {/if}
</div>
```

#### File: `frontend/src/components/AtmManagement/AtmList.svelte` (NEW)

ATM list with connection status indicators:
- Table showing: ID, ATM ID, Location, Model, Status, Agent Status, Last Heartbeat, Actions
- Connection status dot (green=connected, red=disconnected, gray=never)
- Search/filter functionality
- Edit/Delete actions
- Follow pattern from `IncidentList.svelte`

Key helper function:
```typescript
function getConnectionStatus(status: string, lastHeartbeat: string | null) {
  if (status === 'NEVER_CONNECTED') {
    return { label: 'Never Connected', color: '#9ca3af', dot: '⚫' };
  }
  if (status === 'CONNECTED') {
    return { label: 'Connected', color: '#10b981', dot: '🟢' };
  }
  return { label: 'Disconnected', color: '#ef4444', dot: '🔴' };
}
```
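If the list refreshes less often than the 5-minute window, the client can also recompute status locally from `lastHeartbeat`. A sketch mirroring the backend's `calculateConnectionStatus` (illustrative; the function name and the injectable `now` parameter are assumptions, while the 5-minute threshold comes from the Architecture Decisions above):

```typescript
// Client-side mirror of the backend status calculation, useful for
// re-rendering without a fetch. Threshold matches the backend: 5 minutes.
function connectionStatusFromHeartbeat(lastHeartbeat: string | null, now: Date = new Date()): string {
  if (lastHeartbeat === null) {
    return 'NEVER_CONNECTED';
  }
  const ageMinutes = (now.getTime() - new Date(lastHeartbeat).getTime()) / 60000;
  return ageMinutes <= 5 ? 'CONNECTED' : 'DISCONNECTED';
}
```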
#### File: `frontend/src/components/AtmManagement/CreateAtm.svelte` (NEW)

Manual ATM creation form with fields:
- ATM ID (text, required, unique)
- Location (text, required)
- Address (text, required)
- Model (text, required)
- Latitude (number, optional)
- Longitude (number, optional)

Follow pattern from `CreateIncident.svelte` with form validation, loading states, and success messages.

### 3. Navigation Updates

#### File: `frontend/src/App.svelte` (MODIFY)

**Add state variables (around line 20-30):**
```typescript
let atmManagementExpanded = false;
let atmManagementTab: 'list' | 'create' = 'list';

const atmManagementTabs = [
  { id: 'list', label: 'ATM List', icon: '📋' },
  { id: 'create', label: 'Add ATM', icon: '➕' },
];

function selectAtmManagementTab(tabId: string) {
  atmManagementTab = tabId as 'list' | 'create';
  currentView = 'atm-management';
}
```
**Add menu section (insert between ATM Properties and Fleet Management, around line 173):**
```svelte
<div class="nav-group">
  <button
    class="nav-btn"
    class:active={currentView === 'atm-management'}
    on:click={() => {
      atmManagementExpanded = !atmManagementExpanded;
      // When the section has just been expanded, show the list view
      if (atmManagementExpanded) {
        atmManagementTab = 'list';
        currentView = 'atm-management';
      }
    }}
  >
    <span class="nav-icon">📱</span>
    ATM Management
    <span class="expand-icon" class:expanded={atmManagementExpanded}>
      {atmManagementExpanded ? '▼' : '▶'}
    </span>
  </button>
  {#if atmManagementExpanded}
    <div class="submenu">
      {#each atmManagementTabs as tab}
        <button
          class="submenu-btn"
          class:active={currentView === 'atm-management' && atmManagementTab === tab.id}
          on:click={() => selectAtmManagementTab(tab.id)}
        >
          <span class="submenu-icon">{tab.icon}</span>
          {tab.label}
        </button>
      {/each}
    </div>
  {/if}
</div>
```

**Add import (around line 10):**
```typescript
import AtmManagement from './components/AtmManagement';
```

**Add route (in main content section, around line 245):**
```svelte
{:else if currentView === 'atm-management'}
  <div class="atm-management-view">
    <AtmManagement activeTab={atmManagementTab} />
  </div>
```
## Implementation Sequence

### Phase 1: Backend Auto-Registration
1. Modify `CreateJournalEventRequest.java` to add `agentAtmId` field
2. Add `findOrCreateAtm()`, `autoRegisterAtm()`, `updateLastHeartbeat()` to `AtmService`
3. Inject `AtmService` into `JournalEventService` constructor
4. Modify `createEvent()` method in `JournalEventService` to support both atmId formats
5. Test existing `/api/journal-events` endpoint with both formats

### Phase 2: Backend ATM Management
1. Create `CreateAtmRequest.java`
2. Modify `AtmDTO` to add `agentConnectionStatus` and `lastHeartbeat` fields
3. Add `createAtm()`, `updateAtm()`, `deleteAtm()` to `AtmService`
4. Modify `mapToDto()` and add `calculateConnectionStatus()` to `AtmService`
5. Add POST, PUT, DELETE endpoints to `AtmController`

### Phase 3: Frontend Components
1. Add `create`, `update`, `delete` methods to `atmAPI` in `api.ts`
2. Create `AtmList.svelte` component with connection status display
3. Create `CreateAtm.svelte` form component
4. Create `AtmManagement.svelte` container component
5. Create `index.ts` export file

### Phase 4: Frontend Navigation
1. Add state variables and tab configuration to `App.svelte`
2. Add import for `AtmManagement` component
3. Add menu section between ATM Properties and Fleet Management
4. Add route handler in main content section

## Critical Files

**Backend:**
- `backend/src/main/java/com/hiveops/incident/service/AtmService.java`
- `backend/src/main/java/com/hiveops/incident/service/JournalEventService.java`
- `backend/src/main/java/com/hiveops/incident/controller/AtmController.java`
- `backend/src/main/java/com/hiveops/incident/dto/CreateJournalEventRequest.java`
- `backend/src/main/java/com/hiveops/incident/dto/AtmDTO.java`

**Frontend:**
- `frontend/src/App.svelte`
- `frontend/src/lib/api.ts`
- `frontend/src/components/AtmManagement/` (new directory)

## Verification

### Test Auto-Registration
```bash
# Send agent journal event with new ATM (using agent identifier)
curl -X POST http://localhost:8080/api/journal-events \
  -H "Content-Type: application/json" \
  -d '{
    "agentAtmId": "ATM-001",
    "eventType": "CARD_READER_DETECTED",
    "eventDetails": "Card reader initialized",
    "eventSource": "HIVEOPS_AGENT"
  }'

# Verify ATM was auto-created (URL quoted so the shell leaves "?" alone)
curl "http://localhost:8080/api/atms/search?query=ATM-001"

# Test backward compatibility (using database ID)
curl -X POST http://localhost:8080/api/journal-events \
  -H "Content-Type: application/json" \
  -d '{
    "atmId": 1,
    "eventType": "CASSETTE_LOW",
    "eventDetails": "Cassette running low",
    "eventSource": "MANUAL"
  }'
```

### Test Manual ATM Creation
1. Navigate to ATM Management > Add ATM
2. Fill form with: atmId="ATM-002", location="New York", address="123 Main St", model="Hyosung 2700"
3. Submit and verify ATM appears in list
4. Check connection status shows "Never Connected" (gray)

### Test Connection Status
1. Send journal event from existing ATM
2. Refresh ATM list
3. Verify connection status changes to "Connected" (green)
4. Wait 6 minutes
5. Verify status changes to "Disconnected" (red)

### Test Edit/Delete
1. Click edit on an ATM
2. Modify location and address
3. Save and verify changes
4. Click delete
5. Verify ATM status changes to INACTIVE

@ -0,0 +1,152 @@
# Global Typography System

## Summary
Create CSS custom properties for all font sizes, defined in `app.css` `:root`, derived from the sidebar's sizing hierarchy. Replace all hardcoded font-size values across every component CSS file and scoped `<style>` block with the variables.

## Sidebar Reference (the baseline)
- Sidebar header (h1): `1.4rem`
- Nav item text: `0.95rem`
- Submenu item text: `0.85rem`
- Nav icon: `1.1rem`
- Submenu icon: `0.9rem`
- Expand icon: `0.7rem`
- Footer text: `0.85rem`

## CSS Variables (in `app.css :root`)

```css
/* Typography scale */
--font-size-page-title: 1.4rem;    /* h1 page headings (matches sidebar header) */
--font-size-section-title: 1.1rem; /* h2 panel/section headings */
--font-size-card-title: 0.95rem;   /* h3 card titles, stat labels */
--font-size-body: 0.95rem;         /* default body/nav text */
--font-size-body-sm: 0.9rem;       /* secondary text, timestamps, dropdowns */
--font-size-label: 0.85rem;        /* labels, submenu, badges, small text */
--font-size-caption: 0.8rem;       /* table headers, bar labels */
--font-size-tiny: 0.75rem;         /* expand icons, bar values */
--font-size-stat-value: 2rem;      /* large dashboard stat numbers */
--font-size-icon: 1.1rem;          /* icons */
--font-size-icon-sm: 0.9rem;       /* small icons */
--font-size-subtitle: 0.85rem;     /* header subtitle/description */
```

## Files to Modify

### 1. `frontend/src/app.css`
- Add all `--font-size-*` variables to `:root`
- Fix duplicate font-family (remove from app.css, keep App.svelte's)

### 2. `frontend/src/App.svelte` (style block)
- Global `.header h1/h2`: `28px` → `var(--font-size-page-title)`
- Global `.header p`: `14px` → `var(--font-size-subtitle)`
- `.sidebar-header h1`: `1.4rem` → `var(--font-size-page-title)`
- `.nav-btn`: `0.95rem` → `var(--font-size-body)`
- `.nav-icon`: `1.1rem` → `var(--font-size-icon)`
- `.expand-icon`: `0.7rem` → `var(--font-size-tiny)`
- `.submenu-btn`: `0.85rem` → `var(--font-size-label)`
- `.submenu-icon`: `0.9rem` → `var(--font-size-icon-sm)`
- `.main-footer p`: `0.85rem` → `var(--font-size-label)`
### 3. `frontend/src/components/Dashboard/Dashboard.css`
- `.header h1`: `28px` → `var(--font-size-page-title)`
- `.header p`: `14px` → `var(--font-size-subtitle)`
- `.loading`: `1.1rem` → `var(--font-size-section-title)`
- `.stat-card h3`: `0.95rem` → `var(--font-size-card-title)`
- `.stat-value`: `2.5rem` → `var(--font-size-stat-value)`
- `.stat-percentage`: `0.9rem` → `var(--font-size-body-sm)`
- `.panel h2`: `1.1rem` → `var(--font-size-section-title)`
- `.bar-value`: `0.75rem` → `var(--font-size-tiny)`
- `.bar-label`: `0.8rem` → `var(--font-size-caption)`
- `.metrics-table th`: `0.85rem` → `var(--font-size-label)`
- `.region-card h3`: `0.95rem` → `var(--font-size-card-title)`
- `.region-stat .stat-label`: `0.7rem` → `var(--font-size-tiny)`
- `.region-stat .stat-number`: `1rem` → `var(--font-size-body)`
- Media query `.stat-value`: `2rem` → `var(--font-size-stat-value)`

### 4. `frontend/src/components/Incident/IncidentList/IncidentList.css`
- `.header h1`: `28px` → `var(--font-size-page-title)`
- `.header p`: `14px` → `var(--font-size-subtitle)`
- `.filter-group label`: `1rem` → `var(--font-size-body)`
- `.filter-group select`: `1rem` → `var(--font-size-body)`
- `.stats span`: `0.9rem` → `var(--font-size-body-sm)`
- `.loading`: `1.1rem` → `var(--font-size-section-title)`
- `.incidents-table th`: `0.8rem` → `var(--font-size-caption)`
- `.col-created`: `0.9rem` → `var(--font-size-body-sm)`
- `.severity-badge`: `0.85rem` → `var(--font-size-label)`
- `.status-badge`: `0.85rem` → `var(--font-size-label)`
- `.expand-btn`: `0.75rem` → `var(--font-size-tiny)`
- `.action-select`: `0.9rem` → `var(--font-size-body-sm)`
- `.history-btn`: `1.1rem` → `var(--font-size-icon)`
- `.card-title`: `0.9rem` → `var(--font-size-body-sm)`
- `.field-label`: `0.85rem` → `var(--font-size-label)`
- `.field-value`: `0.95rem` → `var(--font-size-body)`
- `.helpdesk-name-large`: `1rem` → `var(--font-size-body)`
- `.helpdesk-department`: `0.8rem` → `var(--font-size-caption)`
- `.tech-name-large`: `1rem` → `var(--font-size-body)`
- `.tech-status`: `0.8rem` → `var(--font-size-caption)`
- `.description-text`: `0.95rem` → `var(--font-size-body)`
- Media query values updated accordingly

### 5. `frontend/src/components/Incident/AtmHistory/AtmHistory.css`
- `.header h1`: `28px` → `var(--font-size-page-title)`
- `.header p`: `14px` → `var(--font-size-subtitle)`
- `.filter-group label`: `0.85rem` → `var(--font-size-label)`
- `.filter-group select`: `0.9rem` → `var(--font-size-body-sm)`
- `.incidents-table th`: `0.8rem` → `var(--font-size-caption)`
- `.col-created`: `0.85rem` → `var(--font-size-label)`
- `.col-resolved`: `0.85rem` → `var(--font-size-label)`
- `.severity-badge/.status-badge`: `0.75rem` → `var(--font-size-label)` (fix inconsistency — was smaller than IncidentList)
- `.no-selection p`: `1.1rem` → `var(--font-size-section-title)`
- `.no-data .hint`: `0.85rem` → `var(--font-size-label)`
- Media query values updated accordingly
### 6. `frontend/src/components/JournalEvents/JournalEvents.css`
- `.header h1`: `28px` → `var(--font-size-page-title)`
- `.header p`: `14px` → `var(--font-size-subtitle)`
- `.filter-group select`: `1rem` → `var(--font-size-body)`
- `.event-icon`: `1.5rem` (keep as-is — special large icon, not in scale)
- `.event-time`: `0.9rem` → `var(--font-size-body-sm)`
- `.event-meta`: `0.9rem` → `var(--font-size-body-sm)`
- `.event-source`: `0.85rem` → `var(--font-size-label)`

### 7. `frontend/src/components/Incident/CreateIncident/CreateIncident.css`
- `.btn-create`: `1rem` → `var(--font-size-body)`
- `.modal-header h2`: `1.5rem` → `var(--font-size-page-title)` (was inconsistent)
- `.close-btn`: `1.5rem` (keep — decorative close X)
- Form inputs: `1rem` → `var(--font-size-body)`
- Buttons: `1rem` → `var(--font-size-body)`

### 8. `frontend/src/components/AtmProperties/AtmProperties.svelte` (scoped style)
- Map all `px` font sizes to variables (28px→page-title, 20px→section-title area, 14px→subtitle, 13px→label, 12px→caption area)
- Keep special large sizes like 48px/32px icons as-is

### 9. `frontend/src/components/Workflow/IncidentWorkflow.svelte` (scoped style)
- Map all font-size values to variables following the same pattern

### 10. `frontend/src/components/common/MultiSelectDropdown.svelte` (scoped style)
- `.msd-toggle`: `1rem` → `var(--font-size-body)`
- `.msd-item`: `0.9rem` → `var(--font-size-body-sm)`
- `.msd-arrow`: `0.75rem` → `var(--font-size-tiny)`

### 11. `frontend/src/components/common/AtmInfoCard.svelte` (scoped style)
- `.atm-info-label`: `0.7rem` → `var(--font-size-tiny)`
- `.atm-info-value`: `0.95rem` → `var(--font-size-body)`
- `.card-status-badge`: `0.75rem` → `var(--font-size-tiny)`

### 12. `frontend/src/components/common/AtmSelector.svelte` (scoped style)
- `.selector-label`: `14px` → `var(--font-size-subtitle)`
- `.atm-select`: `14px` → `var(--font-size-subtitle)`
- `.position-indicator`: `12px` → `var(--font-size-caption)` (closest match in the scale)
- Compact variants: `13px`/`14px` → corresponding variables

## Key normalizations (fixes inconsistencies)
- Page title h1: was `28px` everywhere → now `1.4rem` (matching sidebar header)
- AtmHistory badges: were `0.75rem` (smaller) → now `var(--font-size-label)` = `0.85rem` (same as IncidentList)
- Filter labels: were `0.85rem` in AtmHistory vs `1rem` in IncidentList → AtmHistory maps to `var(--font-size-label)` and IncidentList to `var(--font-size-body)`, per sections 4-5
- Modal h2: was `1.5rem` → `var(--font-size-page-title)` = `1.4rem`
- Font-family: remove duplicate from `app.css`, keep App.svelte global

## Verification
- `cd frontend && npm run build` — build must succeed
- Visual check: all text should look consistent across pages
- Sidebar text sizes unchanged (just using variables now)
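To catch stragglers after the replacement pass, a quick scan for hardcoded `font-size` declarations can help. This is a hypothetical audit helper, not part of the plan's file list; it could be run over each CSS/Svelte source with Node:

```typescript
// Hypothetical audit helper: given CSS text, return font-size declarations
// that still use raw px/rem values instead of the var(--font-size-*) scale.
function findHardcodedFontSizes(css: string): string[] {
  const decl = /font-size\s*:\s*([^;]+);/g;
  const offenders: string[] = [];
  let m: RegExpExecArray | null;
  while ((m = decl.exec(css)) !== null) {
    const value = m[1].trim();
    if (!value.startsWith('var(--font-size-')) {
      offenders.push(value);
    }
  }
  return offenders;
}
```

When the migration is complete, the helper should return an empty list for every file except the deliberate keep-as-is sizes noted above (event icons, close buttons, large stat icons).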
@ -0,0 +1,75 @@
# Plan: Thymeleaf UI for Dynamic API Server

## Context
The app is currently REST-only. The user wants a browser-accessible UI covering:
- Generate API key
- List/view all keys
- Revoke a key
- Test dynamic endpoints

Since the app is stateless (no sessions), the UI controller will call the service layer directly (bypassing REST auth), with UI paths made public in SecurityConfiguration.

---

## Files to Create/Modify

### 1. `pom.xml` — Add dependencies
- `spring-boot-starter-thymeleaf`
- `bootstrap` (WebJar, 5.3.x)
- `webjars-locator-lite` (auto-resolves WebJar versions in templates)

### 2. `SecurityConfiguration.java` — Add public UI paths
Add `/ui/**` to the permit-all list so browser requests don't need an `X-API-Key` header.

### 3. `ApiKeyService.java` — Add `listAllKeys()` method
Returns `List<ApiKey>` via `apiKeyRepository.findAll()` (already available from JpaRepository).

### 4. `UiController.java` (new) — `com/dynamicapi/controller/UiController.java`
Mapped to `/ui/**`. Injects `ApiKeyService` and `DynamicApiConfigLoader` directly.

Endpoints:
- `GET /ui/` → `index.html` (generate key form)
- `POST /ui/generate` → calls `apiKeyService.generateApiKey()`, redirects to index with result
- `GET /ui/keys` → `keys.html` (list all keys)
- `POST /ui/revoke/{key}` → calls `apiKeyService.revokeApiKey()`, redirects to keys page
- `GET /ui/test` → `test.html` (test endpoint form, loads endpoint list)
- `POST /ui/test` → makes RestTemplate call to selected endpoint, returns result to test.html

### 5. Templates (new) — `src/main/resources/templates/`

**`fragments/layout.html`** — Bootstrap 5 base layout with navbar (links: Home, API Keys, Test Endpoints)

**`index.html`** — API Key generation
- Form: clientName (required), description (optional) → POST `/ui/generate`
- On success: shows generated key in a highlighted alert box with copy button (JS)
- On error: shows validation/duplicate error message

**`keys.html`** — Key management table
- Table columns: Client Name, Key (masked, last 8 chars shown), Created At, Last Used, Status (Active/Revoked), Actions
- Revoke button → POST `/ui/revoke/{key}` (only shown for active keys)
- Flash message on successful revoke
|
||||
|
||||
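The masked key column can be produced by a tiny helper. This is only a sketch: the class name, and the exact mask format of "asterisks plus the last 8 characters", are assumptions, not part of the plan above.

```java
// Hypothetical helper for the "masked, last 8 chars shown" column in keys.html.
public class KeyMaskSketch {
    public static String mask(String key) {
        if (key == null || key.length() <= 8) return key; // nothing sensible to mask
        return "****" + key.substring(key.length() - 8);  // keep only the last 8 characters
    }
}
```

It could be exposed to Thymeleaf as a model attribute or applied in the controller before rendering.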
**`test.html`** — Endpoint tester
- Dropdown: select from registered dynamic endpoints (path + method)
- Input: API Key (required to call protected endpoints)
- Submit → POST `/ui/test` → displays JSON response in `<pre>` block

---

## Implementation Notes

- Use Bootstrap 5 via WebJar (no CDN required)
- Thymeleaf fragment includes for navbar (`th:replace`)
- Flash attributes (`RedirectAttributes`) for post-redirect-get pattern on generate/revoke
- RestTemplate in UiController for the endpoint tester (reuse existing pattern from DynamicApiController)
- The endpoint tester calls `http://localhost:8080/api/dynamic/{path}` with the user-supplied API key

---

## Verification

1. `mvn spring-boot:run` compiles and starts cleanly
2. `GET http://localhost:8080/api/ui/` — renders generate key form
3. Submit form → key displayed on page
4. `GET http://localhost:8080/api/ui/keys` — lists all keys
5. Revoke a key → table reflects revoked status
6. `GET http://localhost:8080/api/ui/test` — dropdown shows all dynamic endpoints, submit returns JSON response
@ -0,0 +1,192 @@
# Fix Agent Filtering on ATM List (Issue #7)

## Context

The ATM list in HiveOps Incident Management has three filter categories: Status, Model, and Agent Connection. The Agent Connection filter UI (Connected/Disconnected/Never Connected) is fully implemented but **does not actually filter the results**. Users can select filter options, but all ATMs continue to be displayed regardless of the selection.

**Root Cause:** The `agentFilter` variable exists in the frontend component but is never sent to the backend API. The backend also lacks the implementation to filter by agent connection status.

Agent connection status is calculated based on the `lastHeartbeat` timestamp in the `AtmProperties` table:
- **NEVER_CONNECTED**: `lastHeartbeat` is null
- **CONNECTED**: `lastHeartbeat` is within the last 5 minutes
- **DISCONNECTED**: `lastHeartbeat` is older than 5 minutes
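The three-state mapping above can be sketched as a plain function. This is illustrative only: the plan implements the same logic as Criteria API predicates, and the class and method names here are assumptions.

```java
import java.time.Duration;
import java.time.LocalDateTime;

// Illustrative mirror of the heartbeat classification; not part of the planned changes.
public class AgentStatusSketch {
    private static final Duration THRESHOLD = Duration.ofMinutes(5);

    public static String classify(LocalDateTime lastHeartbeat, LocalDateTime now) {
        if (lastHeartbeat == null) return "NEVER_CONNECTED";          // never reported in
        boolean recent = lastHeartbeat.isAfter(now.minus(THRESHOLD)); // strictly within 5 minutes
        return recent ? "CONNECTED" : "DISCONNECTED";
    }
}
```

Note the boundary choice: a heartbeat exactly 5 minutes old counts as disconnected, matching a `lessThanOrEqualTo` cutoff on the disconnected side.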
## Implementation Plan

### 1. Add Bidirectional Relationship in Atm Entity

**File:** `/source/hiveops-src/hiveops-incident/backend/src/main/java/com/hiveops/incident/entity/Atm.java`

Add `@OneToOne` relationship to `AtmProperties` after line 38:

```java
@OneToOne(mappedBy = "atm", fetch = FetchType.LAZY)
private AtmProperties atmProperties;

public AtmProperties getAtmProperties() { return atmProperties; }
public void setAtmProperties(AtmProperties atmProperties) { this.atmProperties = atmProperties; }
```

This enables JPA Criteria API joins in the specification.

### 2. Implement Agent Status Filtering in Specification

**File:** `/source/hiveops-src/hiveops-incident/backend/src/main/java/com/hiveops/incident/specification/AtmSpecification.java`

Update `withFilters()` method signature (line 13) to accept an `agentStatuses` parameter and implement the time-based filtering logic:

```java
public static Specification<Atm> withFilters(String search, List<String> statuses,
                                             List<String> models, List<String> agentStatuses) {
    return (root, query, cb) -> {
        List<Predicate> predicates = new ArrayList<>();

        // ... existing search, status, model filtering ...

        // NEW: Agent connection status filtering
        if (agentStatuses != null && !agentStatuses.isEmpty()) {
            var propsJoin = root.join("atmProperties", jakarta.persistence.criteria.JoinType.LEFT);
            List<Predicate> agentPredicates = new ArrayList<>();
            LocalDateTime now = LocalDateTime.now();
            LocalDateTime fiveMinutesAgo = now.minusMinutes(5);

            for (String agentStatus : agentStatuses) {
                switch (agentStatus.toLowerCase()) {
                    case "connected":
                        agentPredicates.add(cb.and(
                            cb.isNotNull(propsJoin.get("lastHeartbeat")),
                            cb.greaterThan(propsJoin.get("lastHeartbeat"), fiveMinutesAgo)
                        ));
                        break;
                    case "disconnected":
                        agentPredicates.add(cb.and(
                            cb.isNotNull(propsJoin.get("lastHeartbeat")),
                            cb.lessThanOrEqualTo(propsJoin.get("lastHeartbeat"), fiveMinutesAgo)
                        ));
                        break;
                    case "never_connected":
                        agentPredicates.add(cb.isNull(propsJoin.get("lastHeartbeat")));
                        break;
                }
            }

            if (!agentPredicates.isEmpty()) {
                predicates.add(cb.or(agentPredicates.toArray(new Predicate[0])));
            }
        }

        return cb.and(predicates.toArray(new Predicate[0]));
    };
}
```

Add import: `import java.time.LocalDateTime;`

### 3. Update Controller to Accept Agent Status Parameter

**File:** `/source/hiveops-src/hiveops-incident/backend/src/main/java/com/hiveops/incident/controller/AtmController.java`

Update method signature (lines 41-48) to add the `agentStatus` parameter:

```java
@GetMapping("/paginated")
public ResponseEntity<Page<AtmDTO>> getAllAtmsPaginated(
        @RequestParam(required = false) String search,
        @RequestParam(required = false) List<String> status,
        @RequestParam(required = false) List<String> model,
        @RequestParam(required = false) List<String> agentStatus,
        Pageable pageable) {
    return ResponseEntity.ok(atmService.getAllAtmsPaginated(search, status, model, agentStatus, pageable));
}
```

### 4. Update Service Layer

**File:** `/source/hiveops-src/hiveops-incident/backend/src/main/java/com/hiveops/incident/service/AtmService.java`

Update method signature (lines 66-69) to pass `agentStatuses` to the specification:

```java
public Page<AtmDTO> getAllAtmsPaginated(String search, List<String> statuses,
                                        List<String> models, List<String> agentStatuses,
                                        Pageable pageable) {
    Specification<Atm> spec = AtmSpecification.withFilters(search, statuses, models, agentStatuses);
    return atmRepository.findAll(spec, pageable).map(this::mapToDto);
}
```

### 5. Update Frontend API Interface

**File:** `/source/hiveops-src/hiveops-incident/frontend/src/lib/api.ts`

Add `agentStatus` parameter to the API interface (lines 156-157):

```typescript
getPaginated: (params: PaginationParams & {
  search?: string;
  status?: string[];
  model?: string[];
  agentStatus?: string[]
}) =>
```

### 6. Send Agent Filter from Frontend

**File:** `/source/hiveops-src/hiveops-incident/frontend/src/components/AtmManagement/AtmList.svelte`

**Change 1:** Update `fetchAtms()` function (lines 32-44) to send agentFilter:

```typescript
function fetchAtms() {
  const params: any = {
    page: currentPage,
    size: pageSize,
    sort: `${sortField},${sortDirection}`,
  };
  if (searchQuery) params.search = searchQuery;
  const statuses = [...statusFilter].map(s => s.toUpperCase());
  if (statuses.length > 0) params.status = statuses;
  const models = [...modelFilter];
  if (models.length > 0) params.model = models;
  const agentStatuses = [...agentFilter]; // NEW
  if (agentStatuses.length > 0) params.agentStatus = agentStatuses; // NEW
  loadAtmsPaginated(params);
}
```

**Change 2:** Update reactive statement (line 146) to re-fetch on agent filter change:

```typescript
$: if (statusFilter || modelFilter || agentFilter) {
```

## Verification

### Manual Testing
1. Start backend and frontend servers
2. Navigate to the ATM List page
3. Test single agent filter: select "Connected" only → verify only ATMs with heartbeat ≤ 5 min show
4. Test multiple selections: select "Connected" + "Disconnected" → verify all ATMs except never-connected show
5. Test "Never Connected" → verify only ATMs with null lastHeartbeat show
6. Test combined filters: select agent filter + status filter → verify both filters apply
7. Verify pagination works with the agent filter active
8. Verify the "Clear all" button clears the agent filter
9. Check browser console for API requests → confirm the `agentStatus` parameter is sent

### Backend Testing
Check database query performance with agent filtering:
```sql
-- Verify the query plan includes the join to atm_properties
EXPLAIN SELECT * FROM atms
LEFT JOIN atm_properties ON atms.id = atm_properties.atm_id
WHERE atm_properties.last_heartbeat > NOW() - INTERVAL 5 MINUTE;
```

## Files to Modify

1. `/source/hiveops-src/hiveops-incident/backend/src/main/java/com/hiveops/incident/entity/Atm.java`
2. `/source/hiveops-src/hiveops-incident/backend/src/main/java/com/hiveops/incident/specification/AtmSpecification.java`
3. `/source/hiveops-src/hiveops-incident/backend/src/main/java/com/hiveops/incident/controller/AtmController.java`
4. `/source/hiveops-src/hiveops-incident/backend/src/main/java/com/hiveops/incident/service/AtmService.java`
5. `/source/hiveops-src/hiveops-incident/frontend/src/lib/api.ts`
6. `/source/hiveops-src/hiveops-incident/frontend/src/components/AtmManagement/AtmList.svelte`
@ -0,0 +1,58 @@
# Plan: Browser Update Check on Startup

## Overview
Add a startup version check that queries the release server for the latest browser version. If a newer version exists, show a dialog offering to download it (same UX as the agent download). Also add browser routes to the release server (separate Gitea repo).

## Part 1: Release Server — Add Browser Routes

### New file: `hiveops-release/src/routes/browser.js`
Mirror `routes/agent.js` but for a separate browser Gitea repo:
- Env var: `GITEA_BROWSER_REPO` (default: `hiveops-browser`), reuses existing `GITEA_URL`, `GITEA_TOKEN`, `GITEA_OWNER`
- Asset mapping: `windows` → `.exe`, `linux` → `.AppImage`
- Endpoints:
  - `GET /browser/latest` — returns `{ version, name, published, platforms }`
  - `GET /browser/download?platform={windows|linux}&version={optional}` — streams binary
  - `GET /browser/versions` — lists all versions
- Own release cache (separate from agent)

### Modify: `hiveops-release/src/index.js`
- Import and mount `browserRoutes` alongside `agentRoutes`

## Part 2: Browser — API Client Methods

### Modify: `hiveops-browser/src/main/api-client.js`
Add three methods:
- `checkForBrowserUpdate()` — `GET ${releaseUrl}/browser/latest`, compare version to `app.getVersion()`, return `{ updateAvailable, latestVersion, currentVersion }`
- `downloadBrowser(platform, savePath, onProgress, abortSignal)` — same pattern as `downloadAgent` but hits `/browser/download`
- `getBrowserFilename(platform)` — HEAD to `/browser/download` for the Content-Disposition filename
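The version comparison inside `checkForBrowserUpdate()` is the one non-trivial step. A sketch of the comparison logic, shown here in Java for illustration only (the actual client is Node/Electron, and a plain dotted-numeric scheme is an assumption about how browser versions are tagged):

```java
// Illustrative dotted-numeric version compare; pre-release suffixes are not handled.
public class VersionCompareSketch {
    public static int compare(String a, String b) {
        String[] pa = a.replaceFirst("^v", "").split("\\.");
        String[] pb = b.replaceFirst("^v", "").split("\\.");
        int n = Math.max(pa.length, pb.length);
        for (int i = 0; i < n; i++) {
            int x = i < pa.length ? Integer.parseInt(pa[i]) : 0; // missing parts count as 0
            int y = i < pb.length ? Integer.parseInt(pb[i]) : 0;
            if (x != y) return Integer.compare(x, y);
        }
        return 0;
    }

    public static boolean updateAvailable(String latest, String current) {
        return compare(latest, current) > 0; // only strictly newer triggers the dialog
    }
}
```

Whatever the final implementation, a string equality check alone is not enough: "1.10.0" must rank above "1.9.0".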
## Part 3: Browser — Startup Check + Download Flow

### Modify: `hiveops-browser/src/main/main.js`
In `checkServicesAndStart()`, after the main window is created (~line 197):
1. Call `apiClient.checkForBrowserUpdate()` (non-blocking, won't delay startup)
2. If an update is available, show dialog: "New version v{latest} available. You are running v{current}. Download?"
   - Buttons: **Download** / **Later**
3. On "Download":
   - Auto-detect platform via `os.platform()` (no picker needed)
   - Get server filename via `getBrowserFilename()`
   - Show save dialog → download with progress window (reuse existing `openDownloadProgress`)
4. On "Later": dismiss silently
5. Entire check wrapped in try/catch — failures logged, never block the app

## Files Changed

| File | Action |
|------|--------|
| `hiveops-release/src/routes/browser.js` | **New** |
| `hiveops-release/src/index.js` | **Modify** — mount browser routes |
| `hiveops-browser/src/main/api-client.js` | **Modify** — 3 new methods |
| `hiveops-browser/src/main/main.js` | **Modify** — update check + download flow |

## Verification
1. `GET /browser/latest` returns version info from the release server
2. Browser logs show the update check on startup
3. If a newer version exists, the update dialog appears after the main window loads
4. "Download" → save dialog with server filename → progress → completion
5. "Later" → dismissed, app continues
6. Release server unreachable → app starts normally, error logged silently
@ -0,0 +1,169 @@
# HiveOps Portal UI Implementation Plan

## Overview
Add a Thymeleaf portal UI accessible at `mgmt.directlx.dev/portal` for managing:
- **Customers** (separate from Users/admins)
- **Product Key Licenses** linked to customers
- **API Keys** for hiveops-agent (with configurable scopes)
- **Global Configuration**

## Design Decisions
- **Auth**: Session-based for portal (separate from JWT API)
- **UI**: Bootstrap 5 (via CDN)
- **Customer Model**: Separate entity (Users are admins, Customers get licenses)
- **API Key Scope**: Full agent operations with configurable permissions

---

## Implementation Phases

### Phase 1: Dependencies & Database

**1.1 Update pom.xml** - Add dependencies:
```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-thymeleaf</artifactId>
</dependency>
<dependency>
    <groupId>org.thymeleaf.extras</groupId>
    <artifactId>thymeleaf-extras-springsecurity6</artifactId>
</dependency>
```

**1.2 Create migration** `V3__add_customer_and_apikey_tables.sql`:
- `customers` table (name, email, company, phone, status, metadata)
- `api_keys` table (name, key_prefix, key_hash, scopes, customer_id, status, expires_at)
- Add `customer_id` FK to `licenses` table

### Phase 2: Entities & Repositories

**2.1 Create entities:**
- `Customer.java` - name, email, company, phone, address, notes, status (ACTIVE/SUSPENDED/ARCHIVED)
- `ApiKey.java` - name, keyPrefix, keyHash, scopes (JSON), status, expiresAt, lastUsedAt

**2.2 Update `License.java`** - Add `@ManyToOne Customer customer`

**2.3 Create repositories:**
- `CustomerRepository.java` - findByUuid, findByEmail, search()
- `ApiKeyRepository.java` - findByKeyPrefix, findActiveByKeyPrefix

### Phase 3: Services

**3.1 Create services:**
- `CustomerService.java` - CRUD, search, suspend/activate
- `ApiKeyService.java` - generate (returns raw key once), validate, revoke
- `PortalLicenseService.java` - create license for customer, generate key
- `DashboardService.java` - statistics aggregation

**API Key format**: `hiveops_{8-char-prefix}_{32-char-random}`
- Only SHA-256 hash stored; raw key shown once at creation
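A minimal sketch of that generate-and-hash flow, assuming a lowercase alphanumeric alphabet for the random parts (the class name, alphabet, and helper names are hypothetical, not the real `ApiKeyService`):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.HexFormat;

// Hypothetical sketch of the key scheme; only the hash would be persisted.
public class ApiKeySketch {
    private static final char[] ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789".toCharArray();
    private static final SecureRandom RNG = new SecureRandom();

    private static String randomPart(int len) {
        StringBuilder sb = new StringBuilder(len);
        for (int i = 0; i < len; i++) sb.append(ALPHABET[RNG.nextInt(ALPHABET.length)]);
        return sb.toString();
    }

    // Raw key: shown to the user exactly once at creation time.
    public static String newRawKey() {
        return "hiveops_" + randomPart(8) + "_" + randomPart(32);
    }

    // Stored form: SHA-256 hex digest of the raw key.
    public static String sha256Hex(String raw) {
        try {
            byte[] d = MessageDigest.getInstance("SHA-256")
                    .digest(raw.getBytes(StandardCharsets.UTF_8));
            return HexFormat.of().formatHex(d);
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available on the JVM
        }
    }
}
```

At validation time the lookup would go by the stored `key_prefix` column, then compare `sha256Hex(presentedKey)` against `key_hash`.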
### Phase 4: Security Configuration

**4.1 Create `ApiKeyAuthenticationFilter.java`**
- Process `X-API-Key` header for agent requests
- Validate against hashed keys in DB

**4.2 Update `SecurityConfig.java`** - Dual filter chains:
```java
// Order 1: API chain (JWT, stateless)
@Bean @Order(1)
SecurityFilterChain apiFilterChain() { ... }

// Order 2: Portal chain (session-based)
@Bean @Order(2)
SecurityFilterChain portalFilterChain() {
    // Form login at /portal/login
    // Session management
    // CSRF enabled
}
```

### Phase 5: Portal Controllers

Create in `com.hiveops.mgmt.controller.portal`:

| Controller | Routes | Function |
|------------|--------|----------|
| `PortalDashboardController` | `/portal/`, `/portal/dashboard`, `/portal/login` | Dashboard, login |
| `PortalCustomerController` | `/portal/customers/**` | Customer CRUD |
| `PortalLicenseController` | `/portal/licenses/**` | License management |
| `PortalApiKeyController` | `/portal/api-keys/**` | API key management |
| `PortalSettingsController` | `/portal/settings/**` | Global settings (admin only) |

### Phase 6: Templates & Static Resources

**Directory structure:**
```
src/main/resources/
├── templates/portal/
│   ├── layout/base.html
│   ├── fragments/ (header, sidebar, footer, alerts, pagination)
│   ├── login.html
│   ├── dashboard.html
│   ├── customers/ (list, form, view)
│   ├── licenses/ (list, form, view)
│   ├── api-keys/ (list, form, created)
│   ├── settings/list.html
│   └── error/ (403, 404, 500)
└── static/portal/
    ├── css/styles.css
    ├── js/portal.js
    └── images/logo.svg
```

### Phase 7: Configuration Updates

**Update `application.yml`:**
```yaml
server:
  servlet:
    context-path: /  # Changed from /api
    session:
      timeout: 30m

spring:
  thymeleaf:
    prefix: classpath:/templates/
    suffix: .html
    cache: false
```

---

## Key Files to Modify

| File | Changes |
|------|---------|
| `pom.xml` | Add Thymeleaf dependencies |
| `application.yml` | Add Thymeleaf config, change context-path |
| `SecurityConfig.java` | Dual filter chains for API + Portal |
| `License.java` | Add customer relationship |

## New Files to Create

| Category | Count | Files |
|----------|-------|-------|
| Migration | 1 | V3__add_customer_and_apikey_tables.sql |
| Entities | 2 | Customer.java, ApiKey.java |
| Repositories | 2 | CustomerRepository.java, ApiKeyRepository.java |
| Services | 4 | CustomerService, ApiKeyService, PortalLicenseService, DashboardService |
| Security | 2 | ApiKeyAuthenticationFilter, ApiKeyAuthentication |
| Controllers | 5 | Portal controllers |
| DTOs | 8 | Request/Response DTOs |
| Templates | ~20 | Thymeleaf HTML templates |
| Static | 3 | CSS, JS, images |

---

## Verification

1. **Build**: `./mvnw clean compile` - verify no compilation errors
2. **Database**: Run migrations, verify tables created
3. **Login**: Access `/portal/login`, authenticate as admin@hiveops.com
4. **Customer CRUD**: Create, view, edit, suspend customers
5. **License Generation**: Create license linked to customer, verify key format
6. **API Key Generation**: Create API key, verify raw key shown once, test validation
7. **Settings**: Edit global settings (admin only)
8. **Existing API**: Verify `/api/v1/auth/login` still works with JWT
File diff suppressed because it is too large
@ -0,0 +1,17 @@
{
  "fetchedAt": "2026-02-27T11:16:37.669Z",
  "plugins": [
    {
      "plugin": "code-review@claude-plugins-official",
      "added_at": "2026-02-11T03:16:31.424Z",
      "reason": "just-a-test",
      "text": "This is a test #5"
    },
    {
      "plugin": "fizz@testmkt-marketplace",
      "added_at": "2026-02-12T00:00:00.000Z",
      "reason": "security",
      "text": "this is a security test"
    }
  ]
}
@ -0,0 +1,594 @@
{
  "version": 1,
  "fetchedAt": "2026-02-27T09:53:43.188Z",
  "counts": [
    {
      "plugin": "frontend-design@claude-plugins-official",
      "unique_installs": 247733
    },
    {
      "plugin": "context7@claude-plugins-official",
      "unique_installs": 139992
    },
    {
      "plugin": "superpowers@claude-plugins-official",
      "unique_installs": 118874
    },
    {
      "plugin": "code-review@claude-plugins-official",
      "unique_installs": 117748
    },
    {
      "plugin": "github@claude-plugins-official",
      "unique_installs": 102371
    },
    {
      "plugin": "feature-dev@claude-plugins-official",
      "unique_installs": 98821
    },
    {
      "plugin": "code-simplifier@claude-plugins-official",
      "unique_installs": 96077
    },
    {
      "plugin": "ralph-loop@claude-plugins-official",
      "unique_installs": 83061
    },
    {
      "plugin": "playwright@claude-plugins-official",
      "unique_installs": 79591
    },
    {
      "plugin": "typescript-lsp@claude-plugins-official",
      "unique_installs": 77633
    },
    {
      "plugin": "commit-commands@claude-plugins-official",
      "unique_installs": 64480
    },
    {
      "plugin": "security-guidance@claude-plugins-official",
      "unique_installs": 61288
    },
    {
      "plugin": "serena@claude-plugins-official",
      "unique_installs": 52649
    },
    {
      "plugin": "claude-md-management@claude-plugins-official",
      "unique_installs": 51259
    },
    {
      "plugin": "figma@claude-plugins-official",
      "unique_installs": 45335
    },
    {
      "plugin": "pr-review-toolkit@claude-plugins-official",
      "unique_installs": 44211
    },
    {
      "plugin": "pyright-lsp@claude-plugins-official",
      "unique_installs": 39729
    },
    {
      "plugin": "supabase@claude-plugins-official",
      "unique_installs": 37363
    },
    {
      "plugin": "atlassian@claude-plugins-official",
      "unique_installs": 32129
    },
    {
      "plugin": "agent-sdk-dev@claude-plugins-official",
      "unique_installs": 32056
    },
    {
      "plugin": "claude-code-setup@claude-plugins-official",
      "unique_installs": 31786
    },
    {
      "plugin": "ralph-wiggum@claude-plugins-official",
      "unique_installs": 27190
    },
    {
      "plugin": "explanatory-output-style@claude-plugins-official",
      "unique_installs": 26545
    },
    {
      "plugin": "plugin-dev@claude-plugins-official",
      "unique_installs": 26535
    },
    {
      "plugin": "greptile@claude-plugins-official",
      "unique_installs": 25258
    },
    {
      "plugin": "Notion@claude-plugins-official",
      "unique_installs": 22551
    },
    {
      "plugin": "hookify@claude-plugins-official",
      "unique_installs": 22399
    },
    {
      "plugin": "vercel@claude-plugins-official",
      "unique_installs": 19512
    },
    {
      "plugin": "linear@claude-plugins-official",
      "unique_installs": 18890
    },
    {
      "plugin": "learning-output-style@claude-plugins-official",
      "unique_installs": 17664
    },
    {
      "plugin": "slack@claude-plugins-official",
      "unique_installs": 17084
    },
    {
      "plugin": "playground@claude-plugins-official",
      "unique_installs": 16225
    },
    {
      "plugin": "sentry@claude-plugins-official",
      "unique_installs": 15290
    },
    {
      "plugin": "gopls-lsp@claude-plugins-official",
      "unique_installs": 15060
    },
    {
      "plugin": "csharp-lsp@claude-plugins-official",
      "unique_installs": 14766
    },
    {
      "plugin": "stripe@claude-plugins-official",
      "unique_installs": 13383
    },
    {
      "plugin": "gitlab@claude-plugins-official",
      "unique_installs": 13372
    },
    {
      "plugin": "rust-analyzer-lsp@claude-plugins-official",
      "unique_installs": 13102
    },
    {
      "plugin": "php-lsp@claude-plugins-official",
      "unique_installs": 11490
    },
    {
      "plugin": "jdtls-lsp@claude-plugins-official",
      "unique_installs": 11344
    },
    {
      "plugin": "laravel-boost@claude-plugins-official",
      "unique_installs": 10888
    },
    {
      "plugin": "huggingface-skills@claude-plugins-official",
      "unique_installs": 10390
    },
    {
      "plugin": "skill-creator@claude-plugins-official",
      "unique_installs": 10046
    },
    {
      "plugin": "clangd-lsp@claude-plugins-official",
      "unique_installs": 10030
    },
    {
      "plugin": "firebase@claude-plugins-official",
      "unique_installs": 9621
    },
    {
      "plugin": "swift-lsp@claude-plugins-official",
      "unique_installs": 9391
    },
    {
      "plugin": "coderabbit@claude-plugins-official",
      "unique_installs": 7505
    },
    {
      "plugin": "kotlin-lsp@claude-plugins-official",
      "unique_installs": 7244
    },
    {
      "plugin": "lua-lsp@claude-plugins-official",
      "unique_installs": 5985
    },
    {
      "plugin": "firecrawl@claude-plugins-official",
      "unique_installs": 5069
    },
    {
      "plugin": "circleback@claude-plugins-official",
      "unique_installs": 4411
    },
    {
      "plugin": "asana@claude-plugins-official",
      "unique_installs": 4256
    },
    {
      "plugin": "pinecone@claude-plugins-official",
      "unique_installs": 3916
    },
    {
      "plugin": "posthog@claude-plugins-official",
      "unique_installs": 3483
    },
    {
      "plugin": "claude-opus-4-5-migration@claude-plugins-official",
      "unique_installs": 2714
    },
    {
      "plugin": "sonatype-guide@claude-plugins-official",
      "unique_installs": 2544
    },
    {
      "plugin": "semgrep@claude-plugins-official",
      "unique_installs": 2477
    },
    {
      "plugin": "qodo-skills@claude-plugins-official",
      "unique_installs": 1160
    },
    {
      "plugin": "figma-mcp@claude-plugins-official",
      "unique_installs": 102
    },
    {
      "plugin": "artifact@claude-plugins-official",
      "unique_installs": 76
    },
    {
      "plugin": "example-plugin@claude-plugins-official",
      "unique_installs": 31
    },
    {
      "plugin": "ruby-lsp@claude-plugins-official",
      "unique_installs": 2
    },
    {
      "plugin": "agent-browser@claude-plugins-official",
      "unique_installs": 2
    },
    {
      "plugin": "document-skills@claude-plugins-official",
      "unique_installs": 2
    },
    {
      "plugin": "dart-lsp@claude-plugins-official",
      "unique_installs": 2
    },
    {
      "plugin": "pm@claude-plugins-official",
      "unique_installs": 2
    },
    {
      "plugin": "prd-generator@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "autonomous-loop@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "monday@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "jira@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "universal-dev@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "claude-rules-generator@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "silince-gutnebrg-builder@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "docs-search-tool@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "ralph-v2@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "prototype@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "feature-ears@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "gemini-consult@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "omnisharp-lsp@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "my-time-plugin@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "microsoft-learn@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "lean-lsp@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "doc-bootstrap@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "test-automation-generator@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "why-how-what-output-style@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "frontend-lab@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "aws-diagram@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "spec-writer@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "it-triage-system@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "n8n@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "openspec@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "beast-plan@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "airtable@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "project-collaboration-system@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "latex2cn@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "freshservice@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "hosts-db@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "gdscript-lsp@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "dj-content-creator@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "dev-workflow@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "pyrefly-lsp@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "terraform-ls@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "user-journey-analysis@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "agent-teams@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "design-principles@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "memory-agent@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "ppt-loop@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "bun-typescript@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "codex-skills@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "rs-commands@claude-plugins-official",
      "unique_installs": 1
    },
    {
      "plugin": "hardworking@claude-plugins-official",
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "perlnavigator-lsp@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "miro@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "amber-electric@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "ai-pm-copilot@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "plan-guardian@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "ccpm@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "csharp-roslyn-lsp@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "dune@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "ocpm@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "dokploy@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "continual-learning@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "codeceptjs-e2e-tests@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "forge-security@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "frappe-print-format@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "home-assistant-skills@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "grid-design@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "any-chat-completions@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "typescript-native-lsp@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "vectorhub-memory@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "hello-world@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "datadog@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "creative-music-output-style@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "claude-memory@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "vercel-best-practices@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "vertical-builder@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "prototyper@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "review-submission@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "n8n-skills@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "lorikeet-qa@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "dev-sandbox@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "cursor-team-kit@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "pdf2latex@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "gitlab-mr-review@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "context@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "ewo-discovery-skill@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
},
|
||||
{
|
||||
"plugin": "backend-specialist@claude-plugins-official",
|
||||
"unique_installs": 1
|
||||
}
|
||||
]
|
||||
}
|
||||
|
|
@ -0,0 +1,55 @@
{
  "version": 2,
  "plugins": {
    "frontend-design@claude-plugins-official": [
      {
        "scope": "user",
        "installPath": "/home/directlx/.claude/plugins/cache/claude-plugins-official/frontend-design/55b58ec6e564",
        "version": "55b58ec6e564",
        "installedAt": "2026-02-21T12:07:23.743Z",
        "lastUpdated": "2026-02-25T12:43:03.849Z",
        "gitCommitSha": "aa296ec81e8ccb49c9784f167c2c0aa625a86cec"
      }
    ],
    "code-review@claude-plugins-official": [
      {
        "scope": "user",
        "installPath": "/home/directlx/.claude/plugins/cache/claude-plugins-official/code-review/55b58ec6e564",
        "version": "55b58ec6e564",
        "installedAt": "2026-02-21T12:07:23.791Z",
        "lastUpdated": "2026-02-25T12:43:03.841Z",
        "gitCommitSha": "aa296ec81e8ccb49c9784f167c2c0aa625a86cec"
      }
    ],
    "commit-commands@claude-plugins-official": [
      {
        "scope": "user",
        "installPath": "/home/directlx/.claude/plugins/cache/claude-plugins-official/commit-commands/55b58ec6e564",
        "version": "55b58ec6e564",
        "installedAt": "2026-02-21T12:07:23.824Z",
        "lastUpdated": "2026-02-25T12:43:03.862Z",
        "gitCommitSha": "aa296ec81e8ccb49c9784f167c2c0aa625a86cec"
      }
    ],
    "claude-md-management@claude-plugins-official": [
      {
        "scope": "user",
        "installPath": "/home/directlx/.claude/plugins/cache/claude-plugins-official/claude-md-management/1.0.0",
        "version": "1.0.0",
        "installedAt": "2026-02-21T12:07:23.854Z",
        "lastUpdated": "2026-02-21T12:07:23.854Z",
        "gitCommitSha": "aa296ec81e8ccb49c9784f167c2c0aa625a86cec"
      }
    ],
    "plugin-dev@claude-plugins-official": [
      {
        "scope": "user",
        "installPath": "/home/directlx/.claude/plugins/cache/claude-plugins-official/plugin-dev/55b58ec6e564",
        "version": "55b58ec6e564",
        "installedAt": "2026-02-21T12:07:23.905Z",
        "lastUpdated": "2026-02-25T12:43:03.890Z",
        "gitCommitSha": "aa296ec81e8ccb49c9784f167c2c0aa625a86cec"
      }
    ]
  }
}
@ -0,0 +1,18 @@
{
  "anthropic-agent-skills": {
    "source": {
      "source": "github",
      "repo": "anthropics/skills"
    },
    "installLocation": "/home/directlx/.claude/plugins/marketplaces/anthropic-agent-skills",
    "lastUpdated": "2026-02-21T11:56:03.861Z"
  },
  "claude-plugins-official": {
    "source": {
      "source": "github",
      "repo": "anthropics/claude-plugins-official"
    },
    "installLocation": "/home/directlx/.claude/plugins/marketplaces/claude-plugins-official",
    "lastUpdated": "2026-02-27T11:18:03.157Z"
  }
}
@ -0,0 +1,184 @@
# Project Memory: dlx-ansible

## Infrastructure Overview
- **NPM Server**: nginx (192.168.200.71) - Nginx Proxy Manager for SSL termination
- **Application Servers**: hiveops (192.168.200.112), smartjournal (192.168.200.114)
- **CI/CD Server**: jenkins (192.168.200.91) - Jenkins + SonarQube
- All servers use `dlxadmin` user with passwordless sudo

## Critical Learnings

### SSL Certificate Offloading with Nginx Proxy Manager

**Problem**: Spring Boot applications behind NPM experience redirect loops when accessed via HTTPS.

**Root Cause**: Spring Boot doesn't trust `X-Forwarded-*` headers by default. When NPM terminates SSL and forwards HTTP to the backend, Spring sees HTTP and redirects to HTTPS, creating an infinite loop.

**Solution**: Configure Spring Boot to trust forwarded headers:
```yaml
environment:
  SERVER_FORWARD_HEADERS_STRATEGY: native
  SERVER_USE_FORWARD_HEADERS: true
```

**Key Points**:
- Containers must be **recreated** (not restarted) for env vars to take effect
- Verify with: `curl -I -H 'X-Forwarded-Proto: https' http://localhost:8080/`
- Success indicator: `Strict-Transport-Security` header in response
- Documentation: `docs/SSL-OFFLOADING-FIX.md`
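
As a quick sanity check, the curl verification above can be wrapped in a small helper that inspects the response headers. A minimal sketch (the `check_hsts` name is an assumption, not part of the playbooks):

```bash
# Pipe the output of `curl -sI -H 'X-Forwarded-Proto: https' http://localhost:8080/`
# into this function; it reports whether the SSL-offloading fix took effect.
check_hsts() {
  if grep -qi '^strict-transport-security:'; then
    echo "OK: forwarded headers trusted"
  else
    echo "FAIL: recreate the container and re-check env vars"
  fi
}
```

Usage: `curl -sI -H 'X-Forwarded-Proto: https' http://localhost:8080/ | check_hsts`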

### Docker Compose Best Practices

**Environment Variable Loading**:
- Use the `--env-file` flag when .env is not in the same directory as the compose file
- Example: `docker compose -f docker/docker-compose.yml --env-file .env up -d`

**Container Updates**:
- Restart: Keeps the existing container, doesn't apply env changes
- Recreate: Removes the old container, creates a new one with the latest env/config
- Always recreate when changing environment variables

### HiveOps Application Structure

**Main Deployment** (`/opt/hiveops-deploy/`):
- Full microservices stack
- Services: incident-backend, incident-frontend, mgmt, remote
- Managed via docker-compose

**Standalone Deployment** (`/home/hiveops/`):
- Simplified incident management system
- Separate from main deployment
- Used for direct hiveops.directlx.dev access

### Jenkins Firewall Blocking (2026-02-09)

**Problem**: Jenkins and SonarQube were unreachable from the network.

**Root Cause**: The server had no host_vars file and inherited the default firewall config (SSH only).

**Solution**: Created `host_vars/jenkins.yml` with ports 22, 8080 (Jenkins), 9000 (SonarQube).

**Quick Fix**:
```bash
ansible jenkins -m community.general.ufw -a "rule=allow port=8080 proto=tcp" -b
ansible jenkins -m community.general.ufw -a "rule=allow port=9000 proto=tcp" -b
ansible jenkins -m shell -a "docker start postgresql sonarqube" -b
```

**Key Points**:
- Jenkins runs as a Java system service (not Docker) on port 8080
- SonarQube runs in Docker with a PostgreSQL backend
- Always create a host_vars file for servers with specific firewall needs
- Documentation: `docs/JENKINS-CONNECTIVITY-FIX.md`

## File Locations

### Host Variables
- `/source/dlx-src/dlx-ansible/host_vars/npm.yml` - NPM firewall config
- `/source/dlx-src/dlx-ansible/host_vars/smartjournal.yml` - SmartJournal settings
- `/source/dlx-src/dlx-ansible/host_vars/jenkins.yml` - Jenkins/SonarQube firewall config

## Storage Remediation (2026-02-08)

**Critical Issues Identified**:
1. proxmox-00 root FS: 84.5% full (CRITICAL)
2. proxmox-01 dlx-docker: 81.1% full (HIGH)
3. Unused containers: 1.2 TB allocated
4. SonarQube: 354 GB (82% of allocation)

**Remediation Playbooks Created**:
- `remediate-storage-critical-issues.yml`: Log cleanup, Docker prune, audits
- `remediate-docker-storage.yml`: Deep Docker cleanup + automation
- `remediate-stopped-containers.yml`: Safe container removal with backups
- `configure-storage-monitoring.yml`: Proactive monitoring (5/10 min checks)

**Documentation**:
- `STORAGE-AUDIT.md`: Full hardware/storage analysis (550 lines)
- `STORAGE-REMEDIATION-GUIDE.md`: Step-by-step execution (480 lines)
- `REMEDIATION-SUMMARY.md`: Quick reference (300 lines)

**Expected Results**:
- Total space freed: 1-2 TB
- proxmox-00: 84.5% → 70% (10-15 GB freed)
- proxmox-01: 81.1% → 70% (50-150 GB freed)
- Automation prevents regrowth (weekly prune + hourly monitoring)

**Commit**: 90ed5c1
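
The monitoring logic reduces to mapping a usage percentage to a severity. A minimal sketch with assumed cutoffs (84% CRITICAL / 80% HIGH, mirroring the labels above; the actual playbook thresholds are not quoted here):

```bash
# Classify filesystem usage; cutoffs (84/80) are illustrative assumptions.
usage_level() {  # usage: usage_level <percent-used>
  p="$1"
  if [ "$p" -ge 84 ]; then echo CRITICAL
  elif [ "$p" -ge 80 ]; then echo HIGH
  else echo OK; fi
}

# Example: feed it the root-FS usage reported by df
# usage_level "$(df --output=pcent / | tail -1 | tr -dc 0-9)"
```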

## Common Tasks

### Fix SSL Offloading for Spring Boot Service
1. Add env vars to .env: `SERVER_FORWARD_HEADERS_STRATEGY=native`, `SERVER_USE_FORWARD_HEADERS=true`
2. Add to docker-compose environment section
3. Recreate container: `docker stop <name> && docker rm <name> && docker compose up -d <service>`
4. Verify: Check for `Strict-Transport-Security` header

### Apply Firewall Configuration
- Firewall is managed by the common role (roles/common/tasks/security.yml)
- Controlled per-host via `common_firewall_enabled` and `common_firewall_allowed_ports`
- Some hosts (docker, hiveops, smartjournal) have the firewall disabled for Docker networking

### Run Storage Remediation
1. Test with `--check`: `ansible-playbook playbooks/remediate-storage-critical-issues.yml --check`
2. Deploy monitoring: `ansible-playbook playbooks/configure-storage-monitoring.yml -l proxmox`
3. Fix proxmox-00: `ansible-playbook playbooks/remediate-storage-critical-issues.yml -l proxmox-00`
4. Fix proxmox-01: `ansible-playbook playbooks/remediate-docker-storage.yml -l proxmox-01`
5. Monitor: `tail -f /var/log/storage-monitor.log`
6. Remove containers (optional): `ansible-playbook playbooks/remediate-stopped-containers.yml -e dry_run=false`

## Kubernetes Cluster Setup (2026-02-09)

**Problem**: Attempted to install K3s on LXC containers - failed due to kernel module limitations.

**Root Cause**: LXC containers share the host kernel and cannot load required modules (br_netfilter, overlay).

**Solution**: Delete the LXC containers, create proper QEMU/KVM VMs with Ubuntu 24.04 LTS.

**Cluster Design**:
- 3-node HA cluster with embedded etcd
- All nodes as control plane servers
- K3s v1.31.4+k3s1
- IPs: 192.168.200.215/216/217
- 4GB RAM, 4 CPU cores, 50GB disk per node

**Key Learnings**:
- LXC containers are NOT suitable for Kubernetes
- Always verify: `systemd-detect-virt` should return "kvm", not "lxc"
- Use Ubuntu LTS releases (24.04), not interim releases (24.10)
  - Interim releases have only 9 months of support
  - Ubuntu 24.10 is EOL (July 2025), repositories archived
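
The virtualization check can be encoded as a pre-flight guard. A sketch under the assumption that a helper like `k3s_virt_ok` is acceptable; feed it the output of `systemd-detect-virt`:

```bash
# Refuse K3s installs on LXC, where br_netfilter/overlay cannot be loaded.
k3s_virt_ok() {  # usage: k3s_virt_ok "$(systemd-detect-virt 2>/dev/null || echo none)"
  case "$1" in
    lxc*) echo "refuse: LXC shares the host kernel"; return 1 ;;
    *)    echo "ok: $1"; return 0 ;;
  esac
}
```

Calling it at the top of an install script turns the lesson above into an automatic check instead of a post-mortem.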

**Files Created**:
- `playbooks/install-k3s-cluster.yml` - HA K3s installation
- `host_vars/dlx-kube-{01,02,03}.yml` - Firewall configs
- `docs/K3S-INSTALLATION-GUIDE.md` - Complete guide
- `docs/PROXMOX-VM-SETUP-FOR-K3S.md` - VM creation guide
- `docs/SESSION-PLAN-K3S-DEPLOYMENT.md` - Next session plan
- `scripts/create-k3s-vms.sh` - VM creation automation

**Next Steps**: User creates VMs, then run the K3s installation playbook.

## SmartJournal Kafka Fix (2026-02-20)

**Problem**: `sj_api` logs `localhost/127.0.0.1:9092` warnings on startup and takes ~60s to start.

**Root Causes**:
1. `kafkaservice=kafka:9092` used the external listener — Kafka advertises `192.168.200.114:9092` back to containers, resolving to localhost
2. The Spring Boot `dev` profile hardcodes `localhost:9092` for the admin client — the `KAFKASERVICE` env var only overrides producer/consumer, not the admin client

**Fix**:
- `.env`: `kafkaservice=kafka:29092` (use the internal PLAINTEXT listener)
- `docker-compose-prod.yaml` api service: add `SPRING_KAFKA_BOOTSTRAP_SERVERS=${kafkaservice}`

**Result**: No warnings; startup takes ~20s instead of ~60s

**Also fixed**:
- Typo `mfa_enabled=fasle` → `false` in .env (caused a boolean parse crash)
- Duplicate hyphenated env vars like `${saml-mapper-graph-proxy-port}` — the shell parses the hyphen as the default-value operator, so a literal fallback string is passed instead of the intended value

**Documentation**: `docs/KAFKA-LOCALHOST-FIX.md`
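
The hyphenated-variable pitfall is easy to reproduce: POSIX shells parse `${name-word}` as parameter expansion with a default, so everything after the first hyphen becomes a literal fallback value:

```bash
# ${saml-mapper-graph-proxy-port} means "expand $saml, defaulting to the text
# after the hyphen", so with saml unset the rest of the name leaks through
# as a literal string instead of a variable value.
unset saml
val="${saml-mapper-graph-proxy-port}"
echo "$val"
```

This is why env var names in compose files should use underscores, not hyphens.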

## Security Notes
- Only trust forwarded headers when the backend is not internet-accessible
- The NPM server (192.168.200.71) should be the only server that can reach backend ports
- Backend ports should bind to localhost only: `127.0.0.1:8080:8080`
@ -0,0 +1,5 @@
# Project Memory: claude-code-java

## User Preferences

- **Never commit or push** anything in this repository. Do not run `git commit` or `git push` under any circumstances, even if asked to as part of a workflow.
@ -0,0 +1,3 @@
## User Preferences

- **Never commit anything** — do not run `git commit` or any git operation that modifies history in this repository.
@ -0,0 +1,110 @@
# HiveOps Agent — Project Memory

## Key Architecture

- **Multi-module Maven project** (Java 21, fat JAR via maven-shade-plugin)
- **Main class:** `com.hiveops.AgentApplication`
- **Fat JAR output:** `hiveops-app/target/hiveops-{version}-jar-with-dependencies.jar`
- **Current version:** `3.0.2-SNAPSHOT` (next dev after `3.0.2` release)
- **Git remote:** `http://192.168.200.102/hiveops/hiveops-agent.git` (Gitea)

## Service Endpoints (Production)

- `server.endpoint` → `https://api.bcos.cloud/mgmt` — fleet management, file uploads, command polling
- `incident.endpoint` → `https://api.bcos.cloud/incident` — heartbeat, journal events, connection status

## Known Bug Fixed (2026-02-27)

**Heartbeat was going to the wrong service.**

`HttpHeartbeat` (in `AgentApplication.init()`) was created with `settings` (mgmt endpoint), but the handler that updates `lastHeartbeat` in the incident DB is `AtmAgentController.heartbeat()` in hiveops-incident at `{incident.endpoint}/atm/heartbeat`.

Fix: `AgentApplication.java:472-474` — now loads `incidentSettings` from the `"incident"` prefix and passes those to `HttpHeartbeat`. Committed as `5cb5d65`.

## Heartbeat Details

- **Sent from:** `HttpHeartbeat` (`hiveops-core/.../http/HttpHeartbeat.java`)
- **URL:** `{incident.endpoint}/atm/heartbeat` (PUT)
- **Payload:** `{ country, name, logtype, heartbeat (unix ms) }`
- **Interval:** `heartbeat.interval` property (default 5 min)
- **Received by:** `AtmAgentController.heartbeat()` in hiveops-incident
- **Connection status thresholds:** CONNECTED ≤15 min, DISCONNECTED >15 min, NEVER_CONNECTED = null
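
The thresholds above can be sketched as a small classifier (shell here for illustration only; the real logic lives in hiveops-incident's Java code):

```bash
# Map minutes-since-last-heartbeat to a connection status; an empty
# argument stands in for a null lastHeartbeat (never seen).
conn_status() {  # usage: conn_status <minutes-since-last-heartbeat>
  m="$1"
  if [ -z "$m" ]; then echo NEVER_CONNECTED
  elif [ "$m" -le 15 ]; then echo CONNECTED
  else echo DISCONNECTED; fi
}
```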

## Distribution Build System

**Script:** `deployment/build-dist.sh`

```bash
# Standard full release (all platforms)
deployment/build-dist.sh --release --platform all

# Patch release (JAR only, no config overwrite)
deployment/build-dist.sh --release --patch --platform all

# Scotia variant (includes Scotiabank ext configs)
deployment/build-dist.sh --release --scotia --platform all
```

**Output:** `deployment/dist/hiveops-agent-{version}-{variant}-{platform}.{tar.gz|zip}`

**Variants:** `standard`, `patch`, `scotia`

## Patch vs Full Release

| | Full | Patch |
|--|--|--|
| hiveops-agent.jar | ✓ | ✓ |
| hiveops.properties | ✓ | ✗ |
| ext/*.properties | ✓ | ✗ |
| log4j2.xml | ✓ | ✗ |
| install script | install.sh / install.cmd | patch.sh / patch.cmd |

Patch installer: stops service → backs up JAR (timestamped `.bak`) → installs new JAR → restarts.
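
The backup-and-swap step can be sketched as follows (function name and file layout are illustrative assumptions, not the actual patch.sh contents):

```bash
# Back up the current JAR with a timestamped .bak suffix, then install the new one.
backup_and_install() {  # usage: backup_and_install <new-jar> <install-dir>
  new_jar="$1"; dir="$2"
  ts=$(date +%Y%m%d%H%M%S)
  if [ -f "$dir/hiveops-agent.jar" ]; then
    cp "$dir/hiveops-agent.jar" "$dir/hiveops-agent.jar.$ts.bak"
  fi
  cp "$new_jar" "$dir/hiveops-agent.jar"
}
```

Rollback is then just copying the newest `.bak` back over the JAR and restarting the service.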

## Version Management Workflow

```bash
# 1. Bump to new SNAPSHOT (e.g. 3.0.1 → 3.0.2)
mvn versions:set -DnewVersion=3.0.2-SNAPSHOT -DgenerateBackupPoms=false

# 2. Build release (--release strips SNAPSHOT, builds 3.0.2, reverts POMs to 3.0.2-SNAPSHOT)
deployment/build-dist.sh --release --platform all

# 3. Commit + tag
git add pom.xml */pom.xml
git commit -m "Bump version to 3.0.2-SNAPSHOT"
git tag v3.0.2
git push && git push origin v3.0.2
```

## Key File Locations

| File | Path |
|------|------|
| Main config (Linux default) | `hiveops-app/src/main/resources/hiveops.properties` |
| Main config (Windows override) | `deployment/windows/hiveops.properties` |
| Windows ext configs | `deployment/windows/ext/` |
| Linux ext configs | `hiveops-app/src/main/resources/ext/` |
| Windows scripts | `deployment/windows/scripts/` (incl. startupproj.bat, stopagent.bat) |
| Linux scripts | `hiveops-app/src/main/resources/scripts/` |
| Patch installer (Linux) | `deployment/linux/patch.sh` |
| Patch installer (Windows) | `deployment/windows/patch.cmd` |
| Build script | `deployment/build-dist.sh` |
| Linux installer | `deployment/linux/install.sh` |
| Windows installer | `deployment/windows/install.cmd` |

## Windows Install Paths

- Install dir: `C:\hiveops-agent\`
- Start: `C:\hiveops-agent\startupproj.bat`
- Stop: `C:\hiveops-agent\stopagent.bat` (kills javaw.exe with hiveops in cmdline)

## Linux Install Paths

- Install dir: `/opt/hiveops/`
- Config dir: `/etc/hiveops/`
- Log dir: `/var/log/hiveops/`
- Service: `hiveops-agent` (systemd)
@ -0,0 +1,57 @@
# HiveOps Browser Project Memory

## Architecture
- **Browser**: Electron app at `/source/hiveops-src/hiveops-browser`
- **Mgmt Server**: Spring Boot at `/source/hiveops-src/hiveops-mgmt`
- IPC pattern: kebab-case channels (`get-config`), camelCase in preload (`getConfig`)
- API client returns `{ success, data }` or `{ success, error, status, code }`
- DTOs use Lombok `@Data @Builder @NoArgsConstructor @AllArgsConstructor`
- Services use `@RequiredArgsConstructor @Slf4j`, `@Transactional(readOnly=true)` for reads
- Controllers use `@RestController @RequestMapping("/api/v1/...")` with OpenAPI annotations
- Security: 3 filter chains (API @Order(1), Portal @Order(2), Default @Order(3))
- Flyway migrations in `db/migration/`, H2 dev data in `db/h2/data.sql` (MERGE INTO syntax)
- H2 uses `CHAR(10)` for newlines (no `E'...\n...'` escape syntax like PostgreSQL)

## Key Files
- `src/main/main.js` - Main process, IPC handlers, window management (~1400 lines)
- `src/main/api-client.js` - Axios-based API client
- `src/main/preload.js` - contextBridge IPC exposure
- Window pattern: if the window exists, focus it; else create a BrowserWindow with the preload script

## Legal Content (Added Feb 2026)
- Legal API: `GET /api/v1/legal` and `GET /api/v1/legal/{section}` (public, no auth)
- 4 settings in `global_settings`: `legal.copyright`, `legal.license`, `legal.usagePolicy`, `legal.disclaimers`
- About page has tabbed layout: Info, License, Usage Policy, Disclaimers
- About window size: 650x700

## DevOps Scripts Pattern (Standardized Feb 2026)
All Spring Boot microservices follow this standardized devops-scripts structure:
```
devops-scripts/
├── build-and-push.sh    # Build Docker image, push to registry
├── deploy.sh            # Deploy with docker-compose or docker run
├── docker/
│   └── .env.example     # Service-specific environment template
└── ansible/             # Ansible playbooks (if applicable)
```

**Key Features:**
- Auto-detect version from `pom.xml` using `grep -oP '<version>\K[^<]+' pom.xml | head -1`
- Registry authentication with REGISTRY_USERNAME/REGISTRY_PASSWORD
- Tag both `$VERSION` and `latest` (if version != latest)
- Deploy script auto-detects `docker-compose.prod.yml` or `docker-compose.yml`
- Comprehensive .env.example with all service-specific variables
- Health checks and status reporting
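
The version auto-detection one-liner can be exercised against a throwaway `pom.xml` (requires GNU grep for `-P`; the sample POM content is illustrative):

```bash
# Build a minimal pom.xml in a temp dir and extract its version the same way
# build-and-push.sh does; \K drops the matched tag so only the value remains.
tmp=$(mktemp -d)
cat > "$tmp/pom.xml" <<'EOF'
<project>
  <modelVersion>4.0.0</modelVersion>
  <version>3.0.2-SNAPSHOT</version>
</project>
EOF
VERSION=$(grep -oP '<version>\K[^<]+' "$tmp/pom.xml" | head -1)
echo "$VERSION"
```

Note that `head -1` matters: child-module POMs contain a parent `<version>` too, and only the first match is wanted.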

**When Creating New Microservice:**
1. Copy devops-scripts/ from hiveops-incident, hiveops-auth, or hiveops-mgmt
2. Update service name in build-and-push.sh and deploy.sh
3. Customize docker/.env.example with service-specific variables
4. Create docker-compose.prod.yml with external PostgreSQL
5. Ensure Dockerfile follows multi-stage Alpine pattern

**Standard Ports:**
- hiveops-mgmt: 8080
- hiveops-incident: 8081
- hiveops-auth: 8082
- (next service: 8083, etc.)
@ -0,0 +1,15 @@
# HiveOps Incident - Key Learnings

## Project Structure
- Frontend: SvelteKit at `frontend/`, backend: Spring Boot at `backend/`
- CSS is split between standalone `.css` files (Dashboard, IncidentList, AtmHistory, JournalEvents, CreateIncident) and scoped `<style>` blocks in `.svelte` files (AtmProperties, IncidentWorkflow, MultiSelectDropdown, AtmInfoCard, AtmSelector)
- Global styles: `app.css` (`:root` vars) and `App.svelte` (global selectors with `:global()`)

## Typography System (implemented)
- All font sizes use CSS custom properties defined in `app.css :root`
- Variables: `--font-size-page-title` (1.4rem), `--font-size-section-title` (1.1rem), `--font-size-card-title` (0.95rem), `--font-size-body` (0.95rem), `--font-size-body-sm` (0.9rem), `--font-size-label` (0.85rem), `--font-size-caption` (0.8rem), `--font-size-tiny` (0.75rem), `--font-size-stat-value` (2rem), `--font-size-icon` (1.1rem), `--font-size-icon-sm` (0.9rem), `--font-size-subtitle` (0.85rem)
- `font-family` removed from `app.css` `:root`, kept in `App.svelte` global

## Build
- `cd frontend && npm run build` - quick (~2s); pre-existing unused-CSS-selector warnings in App.svelte dark-mode styles are normal
- Git hosting: Gitea (not GitHub/GitLab)
@ -0,0 +1,45 @@
# HiveOps Management Portal - Memory

## Admin Password Reset Feature

Successfully implemented admin password reset functionality for portal users.

### Key Implementation Details

**Backend Components:**
- `AuditLog.java`: Added USER_PASSWORD_RESET, USER_ENABLED, USER_DISABLED enum values
- `AuditService.java`: Added logPasswordReset(), logUserEnabled(), logUserDisabled() methods
- `UserRepository.java`: Added search() method with JPQL query for email/name search
- `ResetPasswordRequest.java`: DTO with password validation (min 8, max 100 chars)
- `UserService.java`: Enhanced with findAll(), searchUsers(), resetPassword(), enableUser(), disableUser()
- `PortalUserController.java`: New controller at /portal/users with ADMIN role authorization
- `SecurityConfig.java`: Added authorization rule for /portal/users/** requiring ADMIN role

**Frontend Components:**
- `list.html`: User list with search, role filter, and pagination
- `view.html`: User details with action buttons (reset password, enable/disable)
- `reset-password.html`: Password reset form with validation
- `base.html`: Added Users menu item in sidebar (ADMIN only)

### Security Patterns
- BCrypt password encoding (strength 12)
- @PreAuthorize("hasRole('ADMIN')") on controller
- Prevents admin from disabling own account
- All actions logged to audit_logs table with admin email, target user email, and IP address

### Common Patterns in Portal
- Flash messages via RedirectAttributes (success/error)
- getClientIp() helper extracts IP from X-Forwarded-For or remote address
- Pagination with PageRequest.of(page, size, Sort)
- Bootstrap 5 styling with badges for status/role
- Thymeleaf validation with .invalid-feedback for errors
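
The getClientIp() behavior reduces to: take the first entry of `X-Forwarded-For` when present, else fall back to the remote address. A shell sketch of that rule (illustrative only; the real helper is Java):

```bash
# First hop of X-Forwarded-For is the original client; proxies append
# their own addresses after it.
client_ip() {  # usage: client_ip "<x-forwarded-for value>" "<remote addr>"
  xff="$1"; remote="$2"
  if [ -n "$xff" ]; then
    echo "${xff%%,*}" | tr -d ' '
  else
    echo "$remote"
  fi
}
```

Behind NPM this matters: without the header check, every audit-log entry would record the proxy's IP (192.168.200.71) instead of the admin's.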

### Testing Checklist
1. Login as admin@directlx.dev / admin123
2. Navigate to Users menu (visible to ADMIN only)
3. Search users by email/name
4. View user details
5. Reset password (validate min 8 chars, matching confirmation)
6. Enable/disable user (cannot disable self)
7. Verify audit log entries created
8. Test login with new password
@ -0,0 +1,222 @@
# Project Memory

## Git Preferences

- **Default branch**: Always use `main` instead of `master`
  - When initializing repos: `git init -b main`
  - When creating the first commit, use the `main` branch
  - When pushing: `git push -u origin main`

- **Standardized Git Identity**: All HiveOps repositories use consistent git configuration
  - **User name**: `directlx`
  - **User email**: `directlx.dev@gmail.com`
  - Set in each repository: `git config user.name "directlx" && git config user.email "directlx.dev@gmail.com"`
  - Verify before committing: `git config user.name && git config user.email`
  - All 12 HiveOps repositories have been standardized (as of 2026-02-15)

## Documentation Organization

- **Markdown files location**: Always place documentation `.md` files in the `docs/` directory
  - Exception: `README.md` stays in the repository root
  - All other `.md` files go in `docs/`
  - Update references in `README.md` to point to `docs/` paths
  - Keep documentation organized and centralized

- **Deployment Documentation**: Comprehensive guides for microservice deployment
  - `docs/DEPLOYMENT-GUIDE.md` - Complete deployment procedures, troubleshooting, rollback
  - `docs/DEPLOYMENT-QUICKSTART.md` - Fast reference for quick deployments
  - Both guides include standardized git configuration verification steps
  - Deployment workflow: Build → Push → SSH to production → Pull → Deploy → Verify

## Browser-Only Access Restriction

**Context**: HiveOps Incident Management is restricted to work only through the HiveOps Browser application, not direct web browser access.

### Implementation Pattern (Dual-Layer Security)

1. **Nginx Layer** (`instances/services/nginx/conf.d/default.conf`):
   - **API/Agent endpoints** (`^/(api|atm|actuator)/`): NO browser check - agents don't have browser headers
   - **Frontend** (`/`): Browser check required - serves blocked page if unauthorized
   - Location: `/blocked.html` serves `instances/services/nginx/conf.d/blocked.html`
   - **IMPORTANT**: Agents run on ATMs without HiveOps Browser headers - nginx must allow `/api/`, `/atm/`, `/actuator/` paths through without browser checks

2. **Backend Layer** (Java Spring Boot):
   - `BrowserOnlyFilter.java` - Servlet filter that checks headers, allows `/api/**`, `/atm/**`, `/actuator/**`
   - Registered in `SecurityConfig.java` via `.addFilterBefore()`
   - Spring Security handles authentication:
     - `/api/**` - requires JWT authentication (`hasRole("USER")`)
     - `/atm/**` - allows unauthenticated access (`permitAll`) for agent communication
     - `/actuator/health`, `/actuator/info` - public endpoints
|
||||
|
||||
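The routing decision the two layers share can be simulated in a few lines of shell. This is only a sketch of the rule as described above, not the actual nginx or filter logic:

```shell
# Sketch: which requests pass the browser check.
# $1 = request path, $2 = X-HiveOps-Browser header value ("" if absent)
route() {
  case "$1" in
    /api/*|/atm/*|/actuator/*) echo "pass" ;;                 # agents: no browser check
    *) [ "$2" = "true" ] && echo "pass" || echo "blocked" ;;  # frontend: check required
  esac
}

route /atm/fm/modules/US/atm1 ""   # agent call passes without the header
route / ""                         # plain browser gets the blocked page
route / "true"                     # HiveOps Browser passes
```

Note the layering: this path split only decides who reaches the backend; authentication (JWT vs `permitAll`) still happens in Spring Security afterwards.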
### HiveOps Browser Header Injection

**File**: `hiveops-browser/src/main/main.js` — attached to `incidentView.webContents.session`

Injects `X-HiveOps-Browser`, `X-HiveOps-Browser-Version`, a `User-Agent` suffix, and `Authorization: Bearer <token>` (from `authManager.getToken()`). See the "Browser JWT Injection" section below for full details.

### Critical Nginx Configuration Pattern

**Always use DNS resolver + variables for proxy_pass**:

```nginx
# Docker DNS resolver (prevents "host not found" errors at startup)
resolver 127.0.0.11 valid=30s;

location / {
    set $backend "service-name:port";
    proxy_pass http://$backend/;
    # ... proxy headers
}
```

**Why**: Nginx resolves hostnames in `proxy_pass` at config load time. If services aren't ready, nginx fails to start. Using variables defers DNS resolution to request time.

### Files Modified for Browser Restriction

- `hiveops-openmetal/instances/services/nginx/conf.d/default.conf` - nginx config with browser checks
- `hiveops-openmetal/instances/services/nginx/conf.d/blocked.html` - blocked access page
- `hiveops-incident/backend/.../filter/BrowserOnlyFilter.java` - backend filter
- `hiveops-incident/backend/.../config/SecurityConfig.java` - filter registration
- `hiveops-browser/src/main/main.js` - header injection

### Spring Security 6.x Authentication Issue (RESOLVED 2026-02-16)

**Problem**: `AnonymousAuthenticationFilter` overwrites custom authentication, causing 403 errors even when authentication is successfully set.

**Solution**: Disable anonymous authentication in SecurityConfig:

```java
http.anonymous(anonymous -> anonymous.disable())
```

Use `hasRole("USER")` instead of `.authenticated()` for authorization rules.

### Nginx Proxy Path Issue (RESOLVED 2026-02-16)

**Problem**: A simple `proxy_pass http://$backend/;` with `location /incident/` doesn't correctly forward paths over HTTP/2: all requests arrive as `GET /`.

**Solution**: Use a regex location to capture and forward the full path:

```nginx
location ~ ^/incident/(.*)$ {
    set $backend "hiveops-incident:8081";
    set $incident_path /$1;
    proxy_pass http://$backend$incident_path$is_args$args;
    proxy_pass_request_headers on;
}
```

### Agent 405 Errors - Nginx Method/Browser Check Issue (RESOLVED 2026-02-16)

**Problem**: Agents were getting 405 Method Not Allowed errors when calling:
- `/api/atms/config/sync` (POST)
- `/atm/fm/modules/{country}/{atm}` (POST)

**Root Causes**:
1. Global CORS headers restricted all methods to `GET, OPTIONS` on `incident.bcos.cloud`
2. Nginx was enforcing the HiveOps Browser header check on API/agent endpoints

**Solution**:
1. Remove the global CORS method restrictions from the server block
2. Create separate location blocks:
   - `^/(api|atm|actuator)/` → NO browser check (agents don't have headers), allow all methods
   - `/` → Browser check required (frontend only), restrictive CORS
3. Backend Spring Security handles authentication (JWT for `/api/**`, permitAll for `/atm/**`)

### JWT Session Expiry Bug (RESOLVED 2026-02-26)

**Problem**: Users see a 403 on the incident page approximately every 4 hours.

**Root cause**:
1. `auth-manager.js:getToken()` returned expired tokens without checking expiry
2. `main.js` injected the expired `Authorization: Bearer <token>` into all incident view requests
3. The backend's `JwtAuthenticationFilter` rejected the invalid token → no authentication set
4. Spring Security with `anonymous.disable()` returned **403** instead of 401

**Fix 1 — `hiveops-browser/src/main/auth-manager.js`**:

```javascript
getToken() {
  if (!this.isAuthenticated()) { return null; } // ← added expiry check
  const auth = this.getAuth();
  return auth && auth.token ? auth.token : null;
}
```

**Fix 2 — `hiveops-incident/backend/.../config/SecurityConfig.java`**:

```java
.exceptionHandling(ex -> ex
    .authenticationEntryPoint(new HttpStatusEntryPoint(HttpStatus.UNAUTHORIZED))
)
```

Unauthenticated requests now return **401** (not 403) so clients can distinguish session expiry from access denied.

**Note**: The 4-hour session expiry is hardcoded in `auth-manager.js:storeAuth()`. A background validator checks every 15 minutes and forces re-login on expiry.

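The expiry check itself is just a timestamp comparison. The internal field names in `auth-manager.js` are not shown here, so the following only illustrates the arithmetic behind the 4-hour window:

```shell
# Sketch: a token is valid only while now < issued_at + 4h.
issued_at=$(date +%s)
expires_at=$(( issued_at + 4 * 60 * 60 ))   # 4-hour expiry, per storeAuth()
now=$(date +%s)
if [ "$now" -lt "$expires_at" ]; then
  echo "valid"     # getToken() returns the token
else
  echo "expired"   # getToken() returns null; browser forces re-login
fi
```

The 15-minute background validator performs this same comparison periodically rather than waiting for a request to fail.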
### Browser JWT Injection

**File**: `hiveops-browser/src/main/main.js` — `incidentView.webContents.session.webRequest.onBeforeSendHeaders`

Injects into the incident view (not all requests):
- `X-HiveOps-Browser: true`
- `X-HiveOps-Browser-Version: <version>`
- `User-Agent: ... HiveOps/<version>`
- `Authorization: Bearer <token>` (from `authManager.getToken()`)

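For manual testing outside the browser (e.g. with `curl -H`), the same header set can be reproduced by hand. The token value here is a hypothetical placeholder, not a real credential:

```shell
# Sketch: print the header set the browser injects, one per line.
version="2.0.47"
token="<jwt-from-login>"   # hypothetical placeholder, obtain via normal login
headers=$(printf '%s\n' \
  "X-HiveOps-Browser: true" \
  "X-HiveOps-Browser-Version: ${version}" \
  "Authorization: Bearer ${token}")
echo "$headers"
```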
### Browser Release Workflow

```bash
# Run from /source/hiveops-src/hiveops-browser/
./build-all.sh
# Auto-bumps patch version, builds all platforms (Linux + Windows via Wine),
# copies installers to ../hiveops-openmetal/hiveops/instances/browser/downloads/

# SCP installers to CDN server
scp "downloads/HiveOps Browser Setup X.X.XX.exe" \
    "downloads/HiveOps Browser-X.X.XX.AppImage" \
    "downloads/hiveops-browser_X.X.XX_amd64.deb" \
    hiveops@173.231.252.43:~/hiveops/hiveops-openmetal/instances/browser/downloads/

# Run release script on CDN server
ssh hiveops@173.231.252.43
cd ~/hiveops/hiveops-openmetal/instances/browser
./scripts/release-browser.sh X.X.XX \
    "HiveOps Browser Setup X.X.XX.exe" \
    "HiveOps Browser-X.X.XX.AppImage" \
    "hiveops-browser_X.X.XX_amd64.deb"

# Commit version bump in hiveops-browser, commit downloads + CLAUDE.md in hiveops-openmetal
```

**Latest released version: 2.0.47**

### Deployment

```bash
# SSH user for production servers
ssh hiveops@173.231.252.40

# Instance 1 (Services) deployment path — same on both local and production
~/hiveops/hiveops-openmetal/instances/services/

# After nginx config changes, always copy from local:
scp /source/hiveops-src/hiveops-openmetal/hiveops/instances/services/nginx/conf.d/default.conf \
    hiveops@173.231.252.40:~/hiveops/hiveops-openmetal/instances/services/nginx/conf.d/

# Restart services
cd ~/hiveops/hiveops-openmetal/instances/services
docker compose restart nginx
docker compose up -d hiveops-incident
```

### Server IPs

- Services (`incident.bcos.cloud`, `api.bcos.cloud`, etc.) → `173.231.252.40`
- CDN (`cdn.bcos.cloud`, `bcos.cloud`) → `173.231.252.43`
- Database → `173.231.252.45`

For CDN work: `ssh hiveops@173.231.252.43` and `cd ~/hiveops/hiveops-openmetal/instances/browser`

### Reorganized Structure (2026-02-20)

- `hiveops/instances/services/` — was `instances/services/`
- `hiveops/instances/browser/` — was `instances/browser/`
- `shared/database/` — was `instances/database/`
- `hiveops/docker-compose*.yml` — was root `docker-compose*.yml`
- `hiveops/.env` — was root `.env`
- `smartjournal/` — NEW, SmartJournal skeleton

@ -0,0 +1,36 @@

# HiveOps Project Memory

## Key Facts
- CLAUDE.md created at `/source/hiveops-src/CLAUDE.md` — read this first in every session
- Monorepo with 16 sub-projects, all in `/source/hiveops-src/`
- Production server (services): `173.231.252.40` (SSH: `hiveops@173.231.252.40`)
- Production DB server: `173.231.252.45`
- Production CDN server: `173.231.252.43` (SSH: `hiveops@173.231.252.43`) — cdn.bcos.cloud + bcos.cloud (DNS confirmed)
- Production registry: `registry.directlx.dev`
- Local registry: `192.168.200.200:5000`
- Production domain: `*.bcos.cloud`
- Git user: `directlx <directlx.dev@gmail.com>`

## Deployment Paths (Production)
- Services: `~/hiveops/hiveops-openmetal/instances/services/` on 173.231.252.40
- Browser CDN: `~/hiveops/hiveops-openmetal/instances/browser/` on 173.231.252.43
- Database: `shared/database/` on 173.231.252.45

## Browser CDN Release
Use `./scripts/release-browser.sh <version> <win_exe> <linux_appimage> [deb]`
Files go in `downloads/`; the script updates `downloads/browser/latest.json`
**No git on production — scp files directly to 173.231.252.43:**
- Downloads dir: `~/hiveops/hiveops-openmetal/instances/browser/downloads/`
- latest.json: `~/hiveops/hiveops-openmetal/instances/browser/downloads/browser/`
- HTML dir: `~/hiveops/hiveops-openmetal/instances/browser/html/`
- Files are bind-mounted; nginx serves them immediately, no restart needed
- After an nginx config change: `docker exec hiveops-cdn-nginx nginx -s reload`

## CDN URL Behaviour (confirmed working)
- `bcos.cloud` → serves `html/status.html` (HiveOps Production Status — API/incident health)
- `cdn.bcos.cloud/` → 301 to `cdn.bcos.cloud/downloads/`
- `cdn.bcos.cloud/downloads/` → browser installer download page (reads latest.json)
- `cdn.bcos.cloud/downloads/browser/latest.json` → current browser version manifest

## Sub-project CLAUDE.md locations
- hiveops-agent, hiveops-browser, hiveops-incident, hiveops-mgmt, hiveops-tools/hiveops-generator all have their own CLAUDE.md

@ -0,0 +1,8 @@

# Project Memory

## Git Hosting by Path

- `/source/smart-source/*` → **Bitbucket** (not Gitea)
  - Bitbucket user: `mmgsc`
  - Example: `smart-claude` remote is `https://mmgsc@bitbucket.org/smartjournal/smart-claude.git`
- All other projects → **Gitea** (per global CLAUDE.md)

@ -0,0 +1,14 @@

{
  "model": "sonnet",
  "statusLine": {
    "type": "command",
    "command": "zsh ~/.claude/statusline-command.sh"
  },
  "enabledPlugins": {
    "frontend-design@claude-plugins-official": true,
    "code-review@claude-plugins-official": true,
    "commit-commands@claude-plugins-official": true,
    "claude-md-management@claude-plugins-official": true,
    "plugin-dev@claude-plugins-official": true
  }
}

@ -0,0 +1,83 @@

#!/usr/bin/env zsh

# Read JSON input from stdin
input=$(cat)

# Extract values using jq
model=$(echo "$input" | jq -r '.model.display_name // empty')
dir=$(echo "$input" | jq -r '.workspace.current_dir // .cwd // empty')
remaining=$(echo "$input" | jq -r '.context_window.remaining_percentage // empty')
output_style=$(echo "$input" | jq -r '.output_style.name // empty')
vim_mode=$(echo "$input" | jq -r '.vim.mode // empty')

# Get git information (GIT_OPTIONAL_LOCKS=0 to skip optional locks)
branch=""
git_status=""
if [ -n "$dir" ] && [ -d "$dir" ]; then
    branch=$(GIT_OPTIONAL_LOCKS=0 git -C "$dir" branch --show-current 2>/dev/null)
    if [ -n "$branch" ]; then
        if GIT_OPTIONAL_LOCKS=0 git -C "$dir" diff --quiet 2>/dev/null && \
           GIT_OPTIONAL_LOCKS=0 git -C "$dir" diff --cached --quiet 2>/dev/null; then
            git_status=""
        else
            git_status="*"
        fi
    fi
fi

# --- Build status line styled after your zsh PROMPT ---
# PROMPT='%F{cyan}%n@%m %F{yellow}%1~ %F{green}$(git_prompt_info)%f %# '
#
# user@host in cyan | last dir component in yellow | git branch in green
# followed by: model, context %, output style, vim mode

# user@host in cyan
user_host="$(whoami)@$(hostname -s)"
output="\033[36m${user_host}\033[0m"

# Last directory component in yellow
if [ -n "$dir" ]; then
    short_dir=$(basename "$dir")
    output="$output \033[33m${short_dir}\033[0m"
fi

# Git branch in green (matches git_prompt_info style)
if [ -n "$branch" ]; then
    output="$output \033[32m(${branch}${git_status})\033[0m"
fi

# Separator before Claude-specific info
output="$output \033[90m|\033[0m"

# Model name in green
if [ -n "$model" ]; then
    output="$output \033[32m${model}\033[0m"
fi

# Output style in purple (if not default)
if [ -n "$output_style" ] && [ "$output_style" != "default" ]; then
    output="$output \033[35m[${output_style}]\033[0m"
fi

# Context remaining percentage
if [ -n "$remaining" ]; then
    remaining_int=$(printf "%.0f" "$remaining")
    if [ "$remaining_int" -lt 20 ]; then
        # Red if low
        output="$output \033[31mctx:${remaining_int}%\033[0m"
    else
        # Yellow otherwise
        output="$output \033[33mctx:${remaining_int}%\033[0m"
    fi
fi

# Vim mode indicator if enabled
if [ -n "$vim_mode" ]; then
    if [ "$vim_mode" = "INSERT" ]; then
        output="$output \033[32m[I]\033[0m"
    else
        output="$output \033[36m[N]\033[0m"
    fi
fi

printf "%b\n" "$output"

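The script's extraction logic can be exercised without a live Claude Code session by feeding a hand-built payload (field values here are invented) through the same jq filters:

```shell
# Sketch: sample input in the shape the statusline script reads from stdin.
input='{"model":{"display_name":"Sonnet"},"workspace":{"current_dir":"/tmp"},"context_window":{"remaining_percentage":42.7}}'
model=$(echo "$input" | jq -r '.model.display_name // empty')
remaining=$(echo "$input" | jq -r '.context_window.remaining_percentage // empty')
remaining_int=$(printf "%.0f" "$remaining")   # round the percentage as the script does
echo "${model} ctx:${remaining_int}%"
```

Piping the same JSON into the full script (`echo "$input" | zsh ~/.claude/statusline-command.sh`) additionally exercises the git and ANSI-color branches.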