fixed erase csv facility/name
This commit is contained in:
parent
dba0c9477f
commit
e5d30469bc
5 changed files with 217 additions and 15 deletions
@ -0,0 +1,189 @@
Yes — a two-buffer scheme is a good start, but it only **reduces exposure**. It does not make FAT-on-SD immune to mid-write interruption.

What I mean by “immune” is narrower and more mechanical:

> after a brownout, reset, card glitch, or socket bounce, the logger may lose only the record being written, but it does not corrupt earlier records or the directory structure.

Espressif’s own guidance points in this direction. Their file-system considerations document says LittleFS is designed for microcontrollers, has power-failure resilience, and does wear levelling; their FAQ says FatFs does not support write transactions and that accidental power loss can damage partitions, so mitigation has to happen at the application level. ([Espressif Systems][1])

For your use case, I would separate the options into three tiers.

## Best practical approach for this project

### Append-only fixed-record journal in internal flash

Instead of writing CSV lines to FAT, write fixed-size binary records into a reserved flash partition as a ring buffer or append-only log.

Each record would contain something like:

* magic number
* record version
* sequence number
* monotonic ms since boot/run
* payload length
* payload
* CRC32

And the write rule would be:

1. prepare complete record in RAM
2. write one contiguous record to next flash slot
3. verify CRC on readback if desired
4. advance write pointer only after success

On reboot, the parser scans forward until:

* bad magic
* bad CRC
* impossible length
* erased flash pattern

Then it stops. Everything before that point is valid.

That is the model I would call “near-immune.” A power cut may destroy the last record, but not the whole log.

## Why this is stronger than FAT

FAT has shared metadata: directory entries, FAT chains, timestamps, free-space accounting. A torn metadata write can damage the filesystem namespace, which is close to what your `fsck.vfat` output showed. FatFs itself is not transactional under accidental power loss. ([Espressif Systems][2])

With an append-only journal:

* no directory updates during logging
* no long filename metadata
* no cluster allocation churn
* recovery is linear and deterministic

## Good filesystem-based alternative

### LittleFS on internal flash

If you still want “files,” LittleFS is the strongest ESP-IDF choice for this class of device. Espressif describes it as having a good level of power-failure resilience and dynamic wear levelling, and recommends it as a general choice for embedded applications. ([Espressif Systems][1])

I would use LittleFS only for:

* configuration
* exported snapshots
* occasional summary logs

I would still prefer a binary journal for the high-frequency raw stream.

## What I would not trust for raw high-rate logging

### FAT on SD as the primary live log

Even with double buffering, it still depends on:

* socket contact
* card controller behavior
* shared SPI stability
* directory/FAT updates eventually occurring

So buffering improves durability, but the architecture remains fragile. Espressif’s FAQ is explicit that FatFs power-loss damage must be handled at the application level. ([Espressif Systems][2])

# Concrete design I would recommend

## Partition layout

Reserve internal flash partitions, for example:

* `config` — NVS or LittleFS
* `rawlog_a` — append journal
* `rawlog_b` — optional mirror / rollover / generation swap
* `export` — optional LittleFS area for human-readable dump files
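
In ESP-IDF terms, that layout might look something like the following `partitions.csv`. The names follow the list above; the offsets, sizes, app partition, and the custom `0x40` data subtype are placeholders sized for a 4 MB ESP32 and would need adjusting for your module:

```csv
# Name,     Type, SubType, Offset,   Size
nvs,        data, nvs,     0x9000,   0x6000
phy_init,   data, phy,     0xf000,   0x1000
factory,    app,  factory, 0x10000,  0x180000
config,     data, spiffs,  0x190000, 0x40000
rawlog_a,   data, 0x40,    0x1d0000, 0x80000
rawlog_b,   data, 0x40,    0x250000, 0x80000
export,     data, spiffs,  0x2d0000, 0x100000
```

The two `rawlog` partitions would be accessed directly through the `esp_partition` API rather than through any filesystem.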

## Record format

Something like:

```c
struct log_record_hdr {
    uint32_t magic;                // 0x54424C47, for example
    uint16_t version;
    uint16_t type;                 // sample, satellite, event, marker
    uint32_t seq;
    uint32_t ms_since_run_start;
    uint16_t payload_len;
    uint16_t hdr_crc;
    uint32_t payload_crc;
};
```

Followed by compact binary payload.

That gives you:

* forward compatibility
* corruption detection
* clean stop-point on reboot

## Runtime behavior

* Sample task writes into RAM buffer A
* When A fills, swap to B
* Background flush task writes A as a batch of complete records
* Never rewrite old records in place
* Never “update header counts” during the run
* At end of run, optionally write a single end marker record

## Export behavior

After the run, or on explicit command:

* convert binary journal to CSV
* stream out over USB serial, Wi-Fi, or copy to SD if present
* if SD fails, raw evidence still survives in flash

That is the key architectural shift:

> log safely first, prettify later

# If you want redundancy

You mentioned Codex designed a two-buffer system. I would reinterpret “two-buffer” at the storage layer too:

### Option A — dual-generation partitions

Write runs alternately to `rawlog_a` and `rawlog_b`, each with a generation counter in a superblock. On boot, choose the highest valid generation with good CRC. This is similar in spirit to Espressif’s own “two copies” suggestion for handling FatFs power-loss exposure. ([Espressif Systems][2])

### Option B — mirrored commit blocks

For metadata like run header, end marker, and export manifest, store two copies with generation numbers and CRCs. Never rely on a single rewritten sector.

# Wear concern

The obvious objection is flash wear. That is real, but manageable if:

* records are compact
* you use a ring buffer
* you erase in blocks and append sequentially
* raw logs are treated as field-session data, not archival forever

Espressif’s storage guidance explicitly highlights wear levelling for embedded filesystems like LittleFS and SPIFFS. ([Espressif Systems][1])

# My ranking for your T-Beam logger

1. **Best:** raw append-only binary journal in internal flash, then export later
2. **Good:** LittleFS on internal flash for modest-rate logging
3. **Acceptable only as secondary/export:** FAT on SD
4. **Worst:** live CSV append to FAT on SD with frequent closes

# My blunt answer

A logging approach that is “immune” in the engineering sense would be:

> an append-only, CRC-protected, fixed-record journal in internal flash, with no filesystem metadata churn during capture.

That will not make physics go away, but it changes the failure mode from:

* “filesystem damaged, maybe whole log compromised”

to:

* “last partial record lost; all prior records intact.”

That is a much better failure mode.

If you want, I’ll sketch the exact partition table and a Perl-friendly binary record schema so your post-run converter can turn the journal into the same CSV structure you already import.

[1]: https://docs.espressif.com/projects/esp-idf/en/stable/esp32/api-guides/file-system-considerations.html?utm_source=chatgpt.com "File System Considerations - ESP32 - — ESP-IDF ..."
[2]: https://docs.espressif.com/projects/esp-faq/en/latest/esp-faq-en-master.pdf?utm_source=chatgpt.com "ESP-FAQ Handbook - Technical Documents"

@ -24,7 +24,8 @@ Primary serial commands:
 - `start`
 - `flush`
 - `discipline`
-- `erase_logs`
+- `erase_csv`
+- `erase_logs` (compatibility alias)
 
 Notes:

@ -619,16 +619,21 @@ void StorageManager::catFile(Stream& out, const char* path) {
   file.close();
 }
 
-void StorageManager::eraseLogsRecursive(File& dir) {
+void StorageManager::eraseCsvRecursive(File& dir, const char* parentPath) {
   File entry = dir.openNextFile();
   while (entry) {
-    String path = entry.name();
+    String leaf = entry.name();
+    const int slash = leaf.lastIndexOf('/');
+    if (slash >= 0) {
+      leaf.remove(0, slash + 1);
+    }
+    const String path = String(parentPath) + (String(parentPath).endsWith("/") ? "" : "/") + leaf;
     const bool isDir = entry.isDirectory();
     entry.close();
     if (isDir) {
       File subdir = SD.open(path.c_str(), FILE_READ);
       if (subdir) {
-        eraseLogsRecursive(subdir);
+        eraseCsvRecursive(subdir, path.c_str());
         subdir.close();
       }
     } else if (isRecognizedLogName(path)) {

@ -638,20 +643,25 @@ void StorageManager::eraseLogsRecursive(File& dir) {
   }
 }
 
-void StorageManager::eraseLogs(Stream& out) {
+void StorageManager::eraseCsv(Stream& out) {
   if (!mounted()) {
     out.println("storage not mounted");
     return;
   }
   close();
   File dir = SD.open(kLogDir, FILE_READ);
   if (!dir || !dir.isDirectory()) {
     out.println("log directory unavailable");
     dir.close();
     return;
   }
-  eraseLogsRecursive(dir);
+  eraseCsvRecursive(dir, kLogDir);
   dir.close();
-  out.println("logs erased");
+  out.println("csv files erased");
 }
+
+void StorageManager::eraseLogs(Stream& out) {
+  eraseCsv(out);
+}
 
 bool StorageManager::eraseFile(const char* path) {

@ -33,6 +33,7 @@ class StorageManager {
   void close();
   void listFiles(Stream& out);
   void catFile(Stream& out, const char* path);
+  void eraseCsv(Stream& out);
   void eraseLogs(Stream& out);
   bool eraseFile(const char* path);
   bool normalizePath(const char* input, String& normalized) const;

@ -49,7 +50,7 @@ class StorageManager {
   bool writeFully(const uint8_t* data, size_t len, const char* context);
   size_t countLogsRecursive(const char* path) const;
   void listFilesRecursive(File& dir, Stream& out);
-  void eraseLogsRecursive(File& dir);
+  void eraseCsvRecursive(File& dir, const char* parentPath);
 
   bool m_ready = false;
   bool m_newFile = false;

@ -637,6 +637,7 @@ void handleWebIndex() {
   html += "<a href='/cmd?start=1'>start</a> ";
   html += "<a href='/cmd?stop=1'>stop</a> ";
   html += "<a href='/cmd?sd_rescan=1'>sd_rescan</a> ";
+  html += "<a href='/cmd?erase_csv=1'>erase_csv</a> ";
   html += "<a href='/cmd?erase_logs=1'>erase_logs</a></p>";
   html += "<h2>SD Tree</h2><ul>";

@ -704,15 +705,15 @@ void handleWebCommand() {
     } else {
       response = String("erase failed: ") + g_storage.lastError();
     }
-  } else if (g_server.hasArg("erase_logs")) {
+  } else if (g_server.hasArg("erase_csv") || g_server.hasArg("erase_logs")) {
     (void)g_storage.flush();
-    g_storage.eraseLogs(Serial);
+    g_storage.eraseCsv(Serial);
     g_storageReady = g_storage.ready();
     if (!g_storageReady) {
       g_loggingEnabled = false;
     }
     g_logFileCount = g_storage.logFileCount();
-    response = "logs erased";
+    response = "csv files erased";
   } else if (g_server.hasArg("flush")) {
     response = g_storage.flush() ? "buffer flushed" : String("flush failed: ") + g_storage.lastError();
   } else if (g_server.hasArg("stop")) {

@ -749,7 +750,7 @@ void handleWebCommand() {
     response += "\nhalt_reason=";
     response += g_lastHaltReason;
   } else {
-    response = "commands: status flush start stop sd_rescan erase=<path> erase_logs=1";
+    response = "commands: status flush start stop sd_rescan erase=<path> erase_csv=1";
   }
 
   g_server.send(200, "text/plain; charset=utf-8", response);

@ -829,8 +830,8 @@ void handleCommand(const char* line) {
     } else {
       Serial.printf("erase failed: %s\n", g_storage.lastError());
     }
-  } else if (strcasecmp(line, "erase_logs") == 0) {
-    g_storage.eraseLogs(Serial);
+  } else if (strcasecmp(line, "erase_csv") == 0 || strcasecmp(line, "erase_logs") == 0) {
+    g_storage.eraseCsv(Serial);
     g_storageReady = g_storage.ready();
     if (!g_storageReady) {
       g_loggingEnabled = false;

@ -841,7 +842,7 @@ void handleCommand(const char* line) {
     g_lastDisciplineAttemptMs = 0;
     Serial.println("clock discipline requested");
   } else {
-    Serial.println("commands: status quiet verbose flush start stop sd_rescan summary ls cat <path> erase <path> erase_logs discipline");
+    Serial.println("commands: status quiet verbose flush start stop sd_rescan summary ls cat <path> erase <path> erase_csv discipline");
   }
 }