Comprehensive Media Caching Architecture
Overview
This document outlines the architectural design for implementing DR-016 (Image caching and sync with server) and enhancing DR-012 (Local database for media metadata cache).
Goal: Create a comprehensive offline-first architecture that caches all media assets (images, metadata, artwork) locally for instant loading, offline access, and reduced server load.
Scope: All Cacheable Assets
- Images: Posters, backdrops, title cards, logos, thumbnails, banners, profile pictures, disc art
- Metadata: Media items, libraries, collections, people/cast, genres, studios
- User Data: Watch progress, favorites, ratings, playlists
- Media Info: Subtitle/audio track information, chapters, media streams
Current State
✅ Already Implemented
- Database Schema (schema.rs:227-237):
  - thumbnails table with fields: item_id, image_type, image_tag, file_path, width, height, cached_at
  - items table stores metadata including primary_image_tag
  - Index on item_id for fast lookups
- Data Models (models.rs:358-368):
  - Thumbnail struct matches the database schema
- Metadata Storage:
  - items table stores full media metadata
  - user_data table stores playback progress and favorites
❌ Not Yet Implemented
- Image Download & Caching Service: No code to download and cache images
- Cache Invalidation: No logic to check image_tag for updates
- LRU Eviction: No automatic cleanup of old thumbnails
- Repository Integration: Repository pattern doesn't use cached images
- Tauri Commands: No commands to manage thumbnail cache
Architectural Design
1. Media Cache Service
Location: src-tauri/src/cache/
src-tauri/src/cache/
├── mod.rs # Module exports, MediaCacheService
├── images/
│ ├── mod.rs # ImageCacheService
│ ├── download.rs # Image download with retry logic
│ ├── formats.rs # Image format conversion (WebP, AVIF)
│ └── preloader.rs # Intelligent pre-caching
├── metadata/
│ ├── mod.rs # MetadataCacheService
│ ├── sync.rs # Sync with Jellyfin server
│ └── stale.rs # Stale-while-revalidate strategy
└── lru.rs # LRU eviction policy (shared)
1.1 ImageCacheService - All Image Types
Supported Image Types (from Jellyfin API):
- Primary: Poster/cover art (movies, albums, shows)
- Backdrop: Background images
- Logo: Transparent logos for overlays
- Thumb: Thumbnail preview frames
- Banner: Wide banner images
- Art: Disc/box art
- Screenshot: Episode screenshots
- Profile: Actor/person headshots
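These image types could be modeled as a typed enum rather than raw strings, which prevents typos from silently missing the cache. This is a sketch only — the `ImageType` enum and its methods are assumptions, not part of the existing codebase:

```rust
/// Hypothetical typed wrapper for Jellyfin image types (sketch only).
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ImageType {
    Primary,
    Backdrop,
    Logo,
    Thumb,
    Banner,
    Art,
    Screenshot,
    Profile,
}

impl ImageType {
    /// String form used in Jellyfin image URLs (/Items/{id}/Images/{type}).
    pub fn as_str(self) -> &'static str {
        match self {
            ImageType::Primary => "Primary",
            ImageType::Backdrop => "Backdrop",
            ImageType::Logo => "Logo",
            ImageType::Thumb => "Thumb",
            ImageType::Banner => "Banner",
            ImageType::Art => "Art",
            ImageType::Screenshot => "Screenshot",
            ImageType::Profile => "Profile",
        }
    }

    /// Parse the string form back into a variant (e.g., from the DB column).
    pub fn parse(s: &str) -> Option<ImageType> {
        match s {
            "Primary" => Some(ImageType::Primary),
            "Backdrop" => Some(ImageType::Backdrop),
            "Logo" => Some(ImageType::Logo),
            "Thumb" => Some(ImageType::Thumb),
            "Banner" => Some(ImageType::Banner),
            "Art" => Some(ImageType::Art),
            "Screenshot" => Some(ImageType::Screenshot),
            "Profile" => Some(ImageType::Profile),
            _ => None,
        }
    }
}
```

The string methods keep the existing `image_type: &str` interfaces working while the rest of the code gains exhaustiveness checking.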
1.1.1 Core Service
pub struct ImageCacheService {
db: Arc<Database>,
cache_dir: PathBuf,
client: reqwest::Client,
config: CacheConfig,
}
pub struct CacheConfig {
pub max_cache_size_mb: u64, // Default: 500 MB
pub max_age_days: u32, // Default: 30 days
pub quality: ImageQuality, // Default: High
}
pub enum ImageQuality {
Low, // 300px
Medium, // 720px
High, // 1080px
Original, // No resize
}
impl ImageCacheService {
/// Get cached image path or download if missing
pub async fn get_image(
&self,
item_id: &str,
image_type: &str,
image_tag: Option<&str>,
width: Option<u32>,
height: Option<u32>,
) -> Result<PathBuf, CacheError> {
// 1. Check database for existing cache entry
if let Some(cached) = self.db.get_thumbnail(item_id, image_type, image_tag).await? {
// Verify file still exists
if cached.file_path.exists() {
// Update last_accessed for LRU
self.db.touch_thumbnail(cached.id).await?;
return Ok(cached.file_path);
} else {
// File deleted externally, remove DB entry
self.db.delete_thumbnail(cached.id).await?;
}
}
// 2. Download image from Jellyfin server
let image_data = self.download_image(item_id, image_type, width, height).await?;
// 3. Save to disk
let file_path = self.save_image(item_id, image_type, image_tag, &image_data).await?;
// 4. Insert into database
let thumbnail = Thumbnail {
id: None,
item_id: item_id.to_string(),
image_type: image_type.to_string(),
image_tag: image_tag.unwrap_or("").to_string(),
file_path: file_path.clone(),
width: width.map(|w| w as i32),
height: height.map(|h| h as i32),
cached_at: Some(Utc::now()),
last_accessed: Some(Utc::now()),
};
self.db.insert_thumbnail(&thumbnail).await?;
// 5. Check cache size and evict if needed
self.evict_if_needed().await?;
Ok(file_path)
}
/// Check if image is cached and valid
pub async fn is_cached(
&self,
item_id: &str,
image_type: &str,
image_tag: Option<&str>,
) -> Result<bool, CacheError> {
if let Some(cached) = self.db.get_thumbnail(item_id, image_type, image_tag).await? {
// Verify tag matches (cache invalidation)
if let Some(tag) = image_tag {
if cached.image_tag != tag {
// Tag changed, image updated on server
self.db.delete_thumbnail(cached.id).await?;
return Ok(false);
}
}
// Verify file exists
return Ok(cached.file_path.exists());
}
Ok(false)
}
/// Pre-cache images for a batch of items (e.g., library grid)
pub async fn precache_batch(
&self,
items: &[CacheRequest],
priority: CachePriority,
) -> Result<(), CacheError> {
// Download images in parallel with concurrency limit
let futures = items.iter().map(|req| {
self.get_image(
&req.item_id,
&req.image_type,
req.image_tag.as_deref(),
req.width,
req.height,
)
});
// Use a buffered stream to limit concurrency (e.g., 5 at a time);
// requires futures::stream::StreamExt and TryStreamExt in scope
futures::stream::iter(futures)
.buffer_unordered(5)
.try_collect::<Vec<_>>()
.await?;
Ok(())
}
/// Evict old/unused thumbnails when cache size exceeds limit
async fn evict_if_needed(&self) -> Result<(), CacheError> {
let cache_size = self.get_cache_size().await?;
let max_size = self.config.max_cache_size_mb * 1024 * 1024;
if cache_size > max_size {
// Get thumbnails sorted by last_accessed (LRU)
let to_evict = self.db.get_lru_thumbnails(100).await?;
let mut freed = 0u64;
for thumb in to_evict {
if cache_size - freed <= max_size {
break;
}
// Delete file
if let Ok(metadata) = std::fs::metadata(&thumb.file_path) {
freed += metadata.len();
std::fs::remove_file(&thumb.file_path)?;
}
// Delete DB entry
self.db.delete_thumbnail(thumb.id).await?;
}
}
Ok(())
}
}
pub struct CacheRequest {
pub item_id: String,
pub image_type: String,
pub image_tag: Option<String>,
pub width: Option<u32>,
pub height: Option<u32>,
}
pub enum CachePriority {
High, // User navigated to this screen
Medium, // Prefetch for upcoming content
Low, // Background cache warming
}
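The ImageQuality tiers defined in CacheConfig translate into the resize parameters sent to the server. A minimal sketch — the enum is repeated here for self-containment, and `max_width_for` is an assumed helper name:

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ImageQuality {
    Low,      // 300px
    Medium,   // 720px
    High,     // 1080px
    Original, // No resize
}

/// Map a quality tier to the maxWidth parameter sent to Jellyfin;
/// None means "request the original, unresized image".
pub fn max_width_for(quality: ImageQuality) -> Option<u32> {
    match quality {
        ImageQuality::Low => Some(300),
        ImageQuality::Medium => Some(720),
        ImageQuality::High => Some(1080),
        ImageQuality::Original => None,
    }
}
```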
1.2 Database Queries
Location: src-tauri/src/storage/queries/thumbnails.rs
impl Database {
pub async fn get_thumbnail(
&self,
item_id: &str,
image_type: &str,
image_tag: Option<&str>,
) -> Result<Option<Thumbnail>> {
let conn = self.pool.get().await?;
let query = if let Some(tag) = image_tag {
"SELECT * FROM thumbnails
WHERE item_id = ? AND image_type = ? AND image_tag = ?"
} else {
"SELECT * FROM thumbnails
WHERE item_id = ? AND image_type = ?"
};
// Execute query and return Thumbnail
}
pub async fn insert_thumbnail(&self, thumbnail: &Thumbnail) -> Result<i64> {
// INSERT INTO thumbnails...
}
pub async fn touch_thumbnail(&self, id: i64) -> Result<()> {
// UPDATE thumbnails SET last_accessed = CURRENT_TIMESTAMP WHERE id = ?
}
pub async fn get_lru_thumbnails(&self, limit: usize) -> Result<Vec<Thumbnail>> {
// SELECT * FROM thumbnails
// ORDER BY last_accessed ASC
// LIMIT ?
}
pub async fn delete_thumbnail(&self, id: i64) -> Result<()> {
// DELETE FROM thumbnails WHERE id = ?
}
pub async fn get_cache_size(&self) -> Result<u64> {
// SELECT SUM(file_size) FROM thumbnails
// Or calculate from filesystem
}
}
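The `save_image` step above needs a deterministic on-disk layout, which this document does not pin down. One sketch, assuming a `cache_dir/{item_id}/{image_type}-{tag}` scheme (the function name and layout are assumptions, not existing code):

```rust
use std::path::{Path, PathBuf};

/// Hypothetical cache path layout: one directory per item, one file per
/// image type + tag, so a changed tag naturally produces a new file.
pub fn image_cache_path(
    cache_dir: &Path,
    item_id: &str,
    image_type: &str,
    image_tag: Option<&str>,
) -> PathBuf {
    let file_name = format!("{}-{}.img", image_type, image_tag.unwrap_or("untagged"));
    cache_dir.join(item_id).join(file_name)
}
```

Keying the file name on the tag means a stale file is simply never looked up again and gets reclaimed by LRU eviction, rather than needing an explicit overwrite.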
Schema Enhancement (add to migration):
-- Add last_accessed column for LRU
ALTER TABLE thumbnails ADD COLUMN last_accessed TEXT DEFAULT CURRENT_TIMESTAMP;
-- Add file_size for cache size calculation
ALTER TABLE thumbnails ADD COLUMN file_size INTEGER;
-- Create index for LRU queries
CREATE INDEX IF NOT EXISTS idx_thumbnails_lru ON thumbnails(last_accessed ASC);
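The LRU policy — the last_accessed index plus the `evict_if_needed` loop — can be factored into a pure selection function that is easy to unit-test without a database. A sketch, assuming candidates arrive sorted oldest-first (as the index above guarantees):

```rust
/// Given entry sizes sorted by last_accessed ascending (oldest first),
/// return the indices to evict so total size drops to max_size or below.
pub fn select_evictions(sizes_oldest_first: &[u64], total_size: u64, max_size: u64) -> Vec<usize> {
    let mut evict = Vec::new();
    let mut remaining = total_size;
    for (i, &size) in sizes_oldest_first.iter().enumerate() {
        if remaining <= max_size {
            break; // back under budget, stop evicting
        }
        evict.push(i);
        remaining = remaining.saturating_sub(size);
    }
    evict
}
```

The service would then delete the selected files and rows; separating selection from deletion keeps the policy testable.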
2. Repository Integration
2.1 Enhanced getImageUrl()
Location: src/lib/api/repository.ts
export class OnlineRepository implements MediaRepository {
private imageCacheEnabled = true;
async getImageUrl(
itemId: string,
imageType: string,
options: ImageOptions = {}
): Promise<string> {
const { maxWidth, maxHeight, tag } = options;
if (this.imageCacheEnabled) {
// Check if cached locally via Tauri command
try {
const cachedPath = await invoke<string | null>('cache_get_image', {
itemId,
imageType,
imageTag: tag,
width: maxWidth,
height: maxHeight,
});
if (cachedPath) {
// Webviews typically block plain file:// URLs; in Tauri 2, wrap the
// path with convertFileSrc() from '@tauri-apps/api/core' to get a
// URL the webview is allowed to load
return convertFileSrc(cachedPath);
}
} catch (err) {
console.warn('Cache lookup failed, falling back to server URL:', err);
}
}
// Fallback to server URL (will be cached in background)
return this.buildImageUrl(itemId, imageType, options);
}
private buildImageUrl(itemId: string, imageType: string, options: ImageOptions): string {
const params = new URLSearchParams();
if (options.maxWidth) params.set('maxWidth', options.maxWidth.toString());
if (options.maxHeight) params.set('maxHeight', options.maxHeight.toString());
if (options.tag) params.set('tag', options.tag);
return `${this.baseUrl}/Items/${itemId}/Images/${imageType}?${params}`;
}
}
2.2 Background Pre-caching
Location: src/lib/services/imagePreloader.ts
export class ImagePreloader {
// Renamed from `precacheQueue` to avoid colliding with the method below
private queuedIds: Set<string> = new Set();
private processing = false;
/**
* Pre-cache images for items in view
* Called when user navigates to library/album/detail pages
*/
async precacheVisible(items: MediaItem[]): Promise<void> {
const requests = items
.filter(item => item.primaryImageTag)
.map(item => ({
itemId: item.id,
imageType: 'Primary',
imageTag: item.primaryImageTag,
width: 400, // Medium quality for grids
height: 600,
}));
try {
await invoke('cache_precache_batch', { requests, priority: 'high' });
} catch (err) {
console.error('Precache failed:', err);
}
}
/**
* Pre-cache upcoming queue items (for video player)
*/
async precacheQueue(items: MediaItem[]): Promise<void> {
const requests = items
.slice(0, 5) // Next 5 items
.filter(item => item.primaryImageTag)
.map(item => ({
itemId: item.id,
imageType: 'Primary',
imageTag: item.primaryImageTag,
width: 1920,
height: 1080, // Full quality for video player
}));
try {
await invoke('cache_precache_batch', { requests, priority: 'medium' });
} catch (err) {
console.error('Queue precache failed:', err);
}
}
}
// Auto-initialize in app
export const imagePreloader = new ImagePreloader();
Usage in VideoPlayer:
// In VideoPlayer.svelte
import { imagePreloader } from '$lib/services/imagePreloader';
onMount(() => {
// Pre-cache poster for next video in queue
if (nextInQueue) {
imagePreloader.precacheQueue([nextInQueue]);
}
});
3. Tauri Commands
Location: src-tauri/src/commands/cache.rs
use std::sync::Arc;
use serde::{Deserialize, Serialize};
use tauri::State;
use crate::cache::{CachePriority, CacheRequest, ImageCacheService};
#[tauri::command]
pub async fn cache_get_image(
item_id: String,
image_type: String,
image_tag: Option<String>,
width: Option<u32>,
height: Option<u32>,
cache_service: State<'_, Arc<ImageCacheService>>,
) -> Result<Option<String>, String> {
let path = cache_service
.get_image(&item_id, &image_type, image_tag.as_deref(), width, height)
.await
.map_err(|e| e.to_string())?;
Ok(Some(path.to_string_lossy().to_string()))
}
#[tauri::command]
pub async fn cache_is_cached(
item_id: String,
image_type: String,
image_tag: Option<String>,
cache_service: State<'_, Arc<ImageCacheService>>,
) -> Result<bool, String> {
cache_service
.is_cached(&item_id, &image_type, image_tag.as_deref())
.await
.map_err(|e| e.to_string())
}
#[tauri::command]
pub async fn cache_precache_batch(
requests: Vec<CacheRequest>,
priority: String,
cache_service: State<'_, Arc<ImageCacheService>>,
) -> Result<(), String> {
let priority = match priority.as_str() {
"high" => CachePriority::High,
"medium" => CachePriority::Medium,
_ => CachePriority::Low,
};
cache_service
.precache_batch(&requests, priority)
.await
.map_err(|e| e.to_string())
}
#[tauri::command]
pub async fn cache_clear(
cache_service: State<'_, Arc<ImageCacheService>>,
) -> Result<(), String> {
cache_service
.clear_all()
.await
.map_err(|e| e.to_string())
}
#[tauri::command]
pub async fn cache_get_stats(
cache_service: State<'_, Arc<ImageCacheService>>,
) -> Result<CacheStats, String> {
cache_service
.get_stats()
.await
.map_err(|e| e.to_string())
}
#[derive(Serialize, Deserialize)]
pub struct CacheStats {
pub total_images: u64,
pub total_size_mb: f64,
pub cache_hit_rate: f64, // Percentage
}
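The `cache_hit_rate` field can be derived from two counters the service increments on each lookup. A sketch (the counter plumbing and function name are assumptions):

```rust
/// Percentage of lookups served from cache; returns 0.0 before any lookups.
pub fn cache_hit_rate(hits: u64, misses: u64) -> f64 {
    let total = hits + misses;
    if total == 0 {
        0.0
    } else {
        hits as f64 * 100.0 / total as f64
    }
}
```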
4. Metadata Caching Enhancement
4.1 Library Response Caching
When fetching library items from Jellyfin, cache them in the items table:
// In src-tauri/src/commands/library.rs (new command)
#[tauri::command]
pub async fn library_sync_items(
library_id: String,
db: State<'_, Arc<Database>>,
jellyfin_client: State<'_, Arc<JellyfinClient>>,
) -> Result<Vec<Item>, String> {
// 1. Fetch from Jellyfin API
let api_items = jellyfin_client
.get_library_items(&library_id)
.await
.map_err(|e| e.to_string())?;
// 2. Upsert into database
for api_item in &api_items {
let db_item = convert_to_db_item(api_item);
db.upsert_item(&db_item).await.map_err(|e| e.to_string())?;
}
// 3. Return items (now available offline)
Ok(api_items)
}
4.2 Offline-First Repository
export class HybridRepository implements MediaRepository {
constructor(
private onlineRepo: OnlineRepository,
private db: Database
) {}
async getItem(itemId: string): Promise<MediaItem> {
// Try local cache first
try {
const cached = await invoke<MediaItem | null>('db_get_item', { itemId });
if (cached) {
// Refresh in background (stale-while-revalidate)
this.refreshItemInBackground(itemId);
return cached;
}
} catch (err) {
console.warn('Cache lookup failed:', err);
}
// Fetch from server and cache
const item = await this.onlineRepo.getItem(itemId);
await invoke('db_upsert_item', { item }).catch(console.error);
return item;
}
private async refreshItemInBackground(itemId: string): Promise<void> {
try {
const fresh = await this.onlineRepo.getItem(itemId);
await invoke('db_upsert_item', { item: fresh });
} catch (err) {
// Ignore, cached version is good enough
}
}
}
5. Metadata Caching Service
Location: src-tauri/src/cache/metadata/mod.rs
5.1 Comprehensive Metadata Storage
Extended Database Schema:
-- People/Cast (actors, directors, writers)
CREATE TABLE IF NOT EXISTS people (
id TEXT PRIMARY KEY,
server_id TEXT NOT NULL REFERENCES servers(id) ON DELETE CASCADE,
name TEXT NOT NULL,
role TEXT, -- Actor, Director, Writer, etc.
overview TEXT,
primary_image_tag TEXT,
birth_date TEXT,
death_date TEXT,
birth_place TEXT,
synced_at TEXT,
UNIQUE(server_id, id)
);
-- Cast/Crew associations
CREATE TABLE IF NOT EXISTS item_people (
id INTEGER PRIMARY KEY AUTOINCREMENT,
item_id TEXT NOT NULL REFERENCES items(id) ON DELETE CASCADE,
person_id TEXT NOT NULL REFERENCES people(id) ON DELETE CASCADE,
role_type TEXT NOT NULL, -- Actor, Director, Writer, Producer, etc.
role_name TEXT, -- Character name for actors
sort_order INTEGER,
UNIQUE(item_id, person_id, role_type)
);
-- Collections (Box Sets)
CREATE TABLE IF NOT EXISTS collections (
id TEXT PRIMARY KEY,
server_id TEXT NOT NULL REFERENCES servers(id) ON DELETE CASCADE,
name TEXT NOT NULL,
overview TEXT,
primary_image_tag TEXT,
backdrop_image_tags TEXT, -- JSON array
synced_at TEXT,
UNIQUE(server_id, id)
);
-- Collection membership
CREATE TABLE IF NOT EXISTS collection_items (
id INTEGER PRIMARY KEY AUTOINCREMENT,
collection_id TEXT NOT NULL REFERENCES collections(id) ON DELETE CASCADE,
item_id TEXT NOT NULL REFERENCES items(id) ON DELETE CASCADE,
sort_order INTEGER,
UNIQUE(collection_id, item_id)
);
-- Studios/Networks
CREATE TABLE IF NOT EXISTS studios (
id TEXT PRIMARY KEY,
server_id TEXT NOT NULL REFERENCES servers(id) ON DELETE CASCADE,
name TEXT NOT NULL,
overview TEXT,
primary_image_tag TEXT,
synced_at TEXT,
UNIQUE(server_id, id)
);
-- Chapters (for video scrubbing thumbnails)
CREATE TABLE IF NOT EXISTS chapters (
id INTEGER PRIMARY KEY AUTOINCREMENT,
item_id TEXT NOT NULL REFERENCES items(id) ON DELETE CASCADE,
start_position_ticks INTEGER NOT NULL,
name TEXT,
image_tag TEXT,
UNIQUE(item_id, start_position_ticks)
);
-- Genres (with metadata)
CREATE TABLE IF NOT EXISTS genres (
id TEXT PRIMARY KEY,
server_id TEXT NOT NULL REFERENCES servers(id) ON DELETE CASCADE,
name TEXT NOT NULL,
item_count INTEGER DEFAULT 0,
synced_at TEXT,
UNIQUE(server_id, name)
);
-- Create indexes for relationships
CREATE INDEX IF NOT EXISTS idx_item_people_item ON item_people(item_id);
CREATE INDEX IF NOT EXISTS idx_item_people_person ON item_people(person_id);
CREATE INDEX IF NOT EXISTS idx_collection_items_collection ON collection_items(collection_id);
CREATE INDEX IF NOT EXISTS idx_collection_items_item ON collection_items(item_id);
CREATE INDEX IF NOT EXISTS idx_chapters_item ON chapters(item_id);
5.2 MetadataCacheService
pub struct MetadataCacheService {
db: Arc<Database>,
jellyfin_client: Arc<JellyfinClient>,
sync_config: SyncConfig,
}
pub struct SyncConfig {
pub auto_sync: bool, // Auto-sync in background
pub sync_interval_hours: u32, // Default: 6 hours
pub deep_sync: bool, // Include cast, collections, etc.
pub wifi_only: bool, // Sync only on WiFi
}
impl MetadataCacheService {
/// Sync complete library metadata
pub async fn sync_library(&self, library_id: &str) -> Result<SyncReport, CacheError> {
let mut report = SyncReport::default();
// 1. Fetch all items from Jellyfin
let api_items = self.jellyfin_client.get_library_items(library_id).await?;
report.items_fetched = api_items.len();
// 2. Upsert items to database
for api_item in &api_items {
let db_item = self.convert_to_db_item(api_item);
self.db.upsert_item(&db_item).await?;
report.items_synced += 1;
// 3. Deep sync: cast/crew, collections
if self.sync_config.deep_sync {
self.sync_item_people(&api_item).await?;
self.sync_item_collections(&api_item).await?;
}
}
// 4. Update library sync timestamp
self.db.update_library_sync(library_id).await?;
Ok(report)
}
/// Sync cast/crew for an item
async fn sync_item_people(&self, item: &JellyfinItem) -> Result<(), CacheError> {
if let Some(people) = &item.people {
for person in people {
// Upsert person
let db_person = Person {
id: person.id.clone(),
server_id: item.server_id.clone(),
name: person.name.clone(),
role: person.role.clone(),
overview: None,
primary_image_tag: person.primary_image_tag.clone(),
birth_date: None,
death_date: None,
birth_place: None,
synced_at: Some(Utc::now()),
};
self.db.upsert_person(&db_person).await?;
// Create association
let association = ItemPerson {
item_id: item.id.clone(),
person_id: person.id.clone(),
role_type: person.type_field.clone(), // Actor, Director, etc.
role_name: person.role.clone(), // Character name
sort_order: person.sort_order,
};
self.db.upsert_item_person(&association).await?;
}
}
Ok(())
}
/// Fetch item with all related data (cast, collection, chapters)
pub async fn get_item_full(&self, item_id: &str) -> Result<FullItem, CacheError> {
let item = self.db.get_item(item_id).await?
.ok_or(CacheError::NotFound)?;
let cast = self.db.get_item_people(item_id, Some("Actor")).await?;
let crew = self.db.get_item_people(item_id, None).await?; // All roles
let collections = self.db.get_item_collections(item_id).await?;
let chapters = self.db.get_chapters(item_id).await?;
Ok(FullItem {
item,
cast,
crew,
collections,
chapters,
})
}
/// Stale-while-revalidate: Return cached, refresh in background
pub async fn get_item_swr(&self, item_id: &str) -> Result<Item, CacheError> {
// Try cache first
if let Some(cached) = self.db.get_item(item_id).await? {
// Check if stale (older than 6 hours)
if let Some(synced_at) = cached.synced_at {
let age = Utc::now() - synced_at;
if age.num_hours() < self.sync_config.sync_interval_hours as i64 {
return Ok(cached); // Fresh enough
}
}
// Stale, but return it immediately
let cached_clone = cached.clone();
// Refresh in background
let client = self.jellyfin_client.clone();
let db = self.db.clone();
let item_id = item_id.to_string();
tokio::spawn(async move {
if let Ok(fresh) = client.get_item(&item_id).await {
let _ = db.upsert_item(&fresh).await;
}
});
return Ok(cached_clone);
}
// Not in cache, fetch from server
let fresh = self.jellyfin_client.get_item(item_id).await?;
let db_item = self.convert_to_db_item(&fresh);
self.db.upsert_item(&db_item).await?;
Ok(db_item)
}
}
#[derive(Debug, Default)]
pub struct SyncReport {
pub items_fetched: usize,
pub items_synced: usize,
pub images_cached: usize,
pub people_synced: usize,
pub errors: Vec<String>,
}
pub struct FullItem {
pub item: Item,
pub cast: Vec<PersonWithRole>,
pub crew: Vec<PersonWithRole>,
pub collections: Vec<Collection>,
pub chapters: Vec<Chapter>,
}
pub struct PersonWithRole {
pub person: Person,
pub role_type: String, // Actor, Director, etc.
pub role_name: Option<String>, // Character name
}
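The staleness check inside `get_item_swr` reduces to a small predicate. This sketch uses std::time rather than chrono to stay dependency-free; the function name is an assumption:

```rust
use std::time::{Duration, SystemTime};

/// True if the record was synced longer ago than the configured interval.
pub fn is_stale(synced_at: SystemTime, now: SystemTime, sync_interval_hours: u32) -> bool {
    let max_age = Duration::from_secs(sync_interval_hours as u64 * 3600);
    match now.duration_since(synced_at) {
        Ok(age) => age > max_age,
        Err(_) => false, // synced_at is in the future (clock skew): treat as fresh
    }
}
```

Treating clock skew as "fresh" is a deliberate choice here: a wrongly future timestamp triggers at worst a delayed refresh, never a refresh storm.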
6. Smart Pre-caching Strategies
6.1 Predictive Pre-caching
pub struct PrecacheEngine {
image_cache: Arc<ImageCacheService>,
metadata_cache: Arc<MetadataCacheService>,
analytics: Arc<AnalyticsService>,
}
impl PrecacheEngine {
/// Pre-cache based on navigation patterns
pub async fn precache_navigation(&self, context: NavigationContext) -> Result<(), CacheError> {
match context {
NavigationContext::LibraryGrid { library_id, visible_items } => {
// 1. Cache visible items (high priority)
self.precache_grid_items(&visible_items, CachePriority::High).await?;
// 2. Predict next page (medium priority)
let next_page = self.predict_next_page(&library_id, &visible_items).await?;
self.precache_grid_items(&next_page, CachePriority::Medium).await?;
},
NavigationContext::DetailView { item_id } => {
// 1. Cache item details (high priority)
self.metadata_cache.get_item_full(&item_id).await?;
// 2. Cache all images for item
self.precache_item_images(&item_id).await?;
// 3. Cache cast profile pictures (medium priority)
self.precache_cast_images(&item_id).await?;
// 4. If series, cache next episode
if let Some(next_ep) = self.get_next_episode(&item_id).await? {
self.precache_item_images(&next_ep.id).await?;
}
},
NavigationContext::Queue { items } => {
// Cache next 5 items in queue
for (index, item) in items.iter().take(5).enumerate() {
// Priority by queue position (TODO: thread into precache_item_images)
let _priority = match index {
0 => CachePriority::High,
1..=2 => CachePriority::Medium,
_ => CachePriority::Low,
};
self.precache_item_images(&item.id).await?;
}
},
NavigationContext::Search { .. } => {
// No pre-caching for search (unpredictable)
},
}
Ok(())
}
async fn precache_item_images(&self, item_id: &str) -> Result<(), CacheError> {
let item = self.metadata_cache.db.get_item(item_id).await?
.ok_or(CacheError::NotFound)?;
// Cache all image types for this item
let image_types = vec!["Primary", "Backdrop", "Logo", "Thumb"];
for img_type in image_types {
let tag = self.get_image_tag(&item, img_type);
if tag.is_some() {
// Fire and forget
let _ = self.image_cache.get_image(
item_id,
img_type,
tag.as_deref(),
Some(1920),
Some(1080),
).await;
}
}
Ok(())
}
async fn precache_cast_images(&self, item_id: &str) -> Result<(), CacheError> {
let people = self.metadata_cache.db.get_item_people(item_id, Some("Actor")).await?;
for person in people.iter().take(10) { // Top 10 cast
if let Some(tag) = &person.person.primary_image_tag {
let _ = self.image_cache.get_image(
&person.person.id,
"Primary",
Some(tag),
Some(400),
Some(400),
).await;
}
}
Ok(())
}
}
pub enum NavigationContext {
LibraryGrid { library_id: String, visible_items: Vec<String> },
DetailView { item_id: String },
Queue { items: Vec<QueueItem> },
Search { query: String },
}
6.2 Background Cache Warming
pub struct CacheWarmingService {
metadata_cache: Arc<MetadataCacheService>,
image_cache: Arc<ImageCacheService>,
config: WarmingConfig,
}
pub struct WarmingConfig {
pub enabled: bool,
pub warm_on_wifi_only: bool,
pub warm_continue_watching: bool, // Pre-cache items user is likely to watch
pub warm_new_releases: bool, // Pre-cache recently added content
pub warm_favorites: bool, // Pre-cache favorited content
}
impl CacheWarmingService {
/// Run background cache warming (called periodically)
pub async fn warm_cache(&self) -> Result<WarmingReport, CacheError> {
let mut report = WarmingReport::default();
if !self.config.enabled {
return Ok(report);
}
// 1. Continue Watching - User's in-progress items
if self.config.warm_continue_watching {
let in_progress = self.metadata_cache.db
.get_in_progress_items(&self.get_user_id())
.await?;
for item in in_progress.iter().take(20) {
self.warm_item(&item.id).await?;
report.items_warmed += 1;
}
}
// 2. Recently Added - New content
if self.config.warm_new_releases {
let recent = self.metadata_cache.db
.get_recently_added(30) // Last 30 days
.await?;
for item in recent.iter().take(50) {
self.warm_item(&item.id).await?;
report.items_warmed += 1;
}
}
// 3. Favorites
if self.config.warm_favorites {
let favorites = self.metadata_cache.db
.get_favorites(&self.get_user_id())
.await?;
for item in favorites.iter().take(100) {
self.warm_item(&item.id).await?;
report.items_warmed += 1;
}
}
Ok(report)
}
async fn warm_item(&self, item_id: &str) -> Result<(), CacheError> {
// Fetch metadata (stale-while-revalidate)
let _ = self.metadata_cache.get_item_swr(item_id).await?;
// Cache primary image
let item = self.metadata_cache.db.get_item(item_id).await?
.ok_or(CacheError::NotFound)?;
if let Some(tag) = &item.primary_image_tag {
let _ = self.image_cache.get_image(
item_id,
"Primary",
Some(tag),
Some(1080),
Some(1620),
).await;
}
Ok(())
}
}
#[derive(Debug, Default)]
pub struct WarmingReport {
pub items_warmed: usize,
pub images_cached: usize,
}
7. Offline-First Data Flow
sequenceDiagram
participant UI as UI Component
participant Repo as HybridRepository
participant Cache as MetadataCache
participant DB as SQLite
participant API as Jellyfin API
participant ImgCache as ImageCache
participant FS as File System
UI->>Repo: getItem(itemId)
Repo->>Cache: get_item_swr(itemId)
par Immediate Return
Cache->>DB: SELECT * FROM items WHERE id = ?
DB-->>Cache: Cached Item (may be stale)
Cache-->>Repo: Return cached item
Repo-->>UI: Display immediately
and Background Refresh
Cache->>API: GET /Items/{itemId}
API-->>Cache: Fresh item data
Cache->>DB: UPDATE items SET ...
end
UI->>Repo: getImageUrl(itemId, "Primary")
Repo->>ImgCache: get_image(itemId, "Primary")
alt Image Cached
ImgCache->>DB: Check thumbnails table
DB-->>ImgCache: Cached path
ImgCache->>FS: Verify file exists
FS-->>ImgCache: File exists
ImgCache-->>Repo: file:///path/to/image.jpg
Repo-->>UI: Display immediately (<50ms)
else Image Not Cached
ImgCache->>API: GET /Items/{id}/Images/Primary
API-->>ImgCache: Image data
ImgCache->>FS: Save to cache dir
ImgCache->>DB: INSERT INTO thumbnails
ImgCache-->>Repo: file:///path/to/image.jpg
Repo-->>UI: Display (~500ms first time)
end
8. Complete Tauri Commands API
Location: src-tauri/src/commands/cache.rs
// Image Cache Commands
#[tauri::command]
pub async fn cache_get_image(...) -> Result<Option<String>, String> { /* ... */ }
#[tauri::command]
pub async fn cache_get_all_images(
item_id: String,
cache_service: State<'_, Arc<ImageCacheService>>,
) -> Result<HashMap<String, String>, String> {
// Returns all cached image types for an item
// { "Primary": "file:///...", "Backdrop": "file:///...", ... }
}
#[tauri::command]
pub async fn cache_precache_batch(...) -> Result<(), String> { /* ... */ }
// Metadata Cache Commands
#[tauri::command]
pub async fn metadata_sync_library(
library_id: String,
deep_sync: bool,
metadata_service: State<'_, Arc<MetadataCacheService>>,
) -> Result<SyncReport, String> { /* ... */ }
#[tauri::command]
pub async fn metadata_get_item_full(
item_id: String,
metadata_service: State<'_, Arc<MetadataCacheService>>,
) -> Result<FullItem, String> {
// Returns item with cast, crew, collections, chapters
}
#[tauri::command]
pub async fn metadata_get_person(
person_id: String,
metadata_service: State<'_, Arc<MetadataCacheService>>,
) -> Result<Person, String> { /* ... */ }
#[tauri::command]
pub async fn metadata_get_person_filmography(
person_id: String,
metadata_service: State<'_, Arc<MetadataCacheService>>,
) -> Result<Vec<Item>, String> {
// Get all items this person appears in
}
#[tauri::command]
pub async fn metadata_search_offline(
query: String,
filters: SearchFilters,
db: State<'_, Arc<Database>>,
) -> Result<SearchResults, String> {
// FTS5 search across cached items
}
// Cache Management Commands
#[tauri::command]
pub async fn cache_get_stats(...) -> Result<CacheStats, String> { /* ... */ }
#[tauri::command]
pub async fn cache_clear_all(
image_cache: State<'_, Arc<ImageCacheService>>,
metadata_cache: State<'_, Arc<MetadataCacheService>>,
) -> Result<(), String> {
image_cache.clear_all().await.map_err(|e| e.to_string())?;
metadata_cache.clear_all().await.map_err(|e| e.to_string())?;
Ok(())
}
#[tauri::command]
pub async fn cache_clear_images_only(...) -> Result<(), String> { /* ... */ }
#[tauri::command]
pub async fn cache_clear_metadata_only(...) -> Result<(), String> { /* ... */ }
// Pre-caching Commands
#[tauri::command]
pub async fn precache_navigation(
context: NavigationContext,
precache_engine: State<'_, Arc<PrecacheEngine>>,
) -> Result<(), String> { /* ... */ }
#[tauri::command]
pub async fn cache_warm_background(
warming_service: State<'_, Arc<CacheWarmingService>>,
) -> Result<WarmingReport, String> { /* ... */ }
Implementation Plan
Phase 1: Core Caching Infrastructure (Week 1)
- ✅ Database schema enhancement (add last_accessed, file_size to thumbnails)
- ✅ Create src-tauri/src/cache/ module
- ✅ Implement ImageCacheService with basic download and storage
- ✅ Add database queries for thumbnails
- ✅ Create Tauri commands: cache_get_image, cache_is_cached
Testing:
- Unit tests for cache service
- Integration test: Download and retrieve thumbnail
- Verify file system operations
Phase 2: Repository Integration (Week 2)
- ✅ Update OnlineRepository.getImageUrl() to check cache
- ✅ Implement ImagePreloader service
- ✅ Add cache checking to VideoPlayer component
- ✅ Wire up precaching in library navigation
Testing:
- E2E test: Navigate to library, verify images load from cache
- Measure load time improvement
Phase 3: LRU Eviction & Optimization (Week 3)
- ✅ Implement evict_if_needed() with LRU policy
- ✅ Add background cache warming (popular content)
- ✅ Implement cache_precache_batch command
- ✅ Add cache statistics tracking
Testing:
- Test cache size limit enforcement
- Verify LRU eviction removes oldest items
- Performance benchmarks
Phase 4: Metadata Caching (Week 4)
- ✅ Implement db_upsert_item and db_get_item commands
- ✅ Create HybridRepository with offline-first strategy
- ✅ Add stale-while-revalidate pattern
- ✅ Implement background sync service
Testing:
- Test offline mode with cached metadata
- Verify background refresh works
- Test cache invalidation on etag changes
Performance Impact
Before (Current State)
- Video Player Load: 500-2000ms (network fetch)
- Library Grid Load: 2-5s for 50 items (50 image requests)
- Offline Support: None
After (With Caching)
- Video Player Load: 50-100ms (local file read)
- Library Grid Load: 200-500ms (cached images)
- Offline Support: Full metadata + images available offline
Expected Improvements:
- 10x faster video player initialization
- 5-10x faster library browsing
- Zero loading time on repeat navigation
Storage Estimates
| Content Type | Image Type | Resolution | Size per Image | 1000 Items |
|---|---|---|---|---|
| Movies | Poster | 400x600 | ~80 KB | 80 MB |
| Movies | Backdrop | 1920x1080 | ~200 KB | 200 MB |
| TV Shows | Poster | 400x600 | ~80 KB | 80 MB |
| Albums | Cover | 400x400 | ~60 KB | 60 MB |
Recommended Cache Size: 500 MB (configurable)
- ~6,000 posters or ~2,500 backdrops
- Sufficient for typical library browsing
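The capacity figures follow from simple division (using decimal units, 1 MB = 1,000 KB, which matches the table's rough per-image sizes):

```rust
/// How many images of `avg_kb` fit in a cache budget of `budget_mb`
/// (decimal megabytes, i.e. 1 MB = 1,000 KB).
pub fn images_per_budget(budget_mb: u64, avg_kb: u64) -> u64 {
    (budget_mb * 1000) / avg_kb
}
```

For the 500 MB default: 6,250 posters at ~80 KB (rounded to "~6,000" above) or 2,500 backdrops at ~200 KB.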
Cache Invalidation Strategy
- Image Tag Comparison:
  - Jellyfin provides an ImageTag for each image
  - Compare tags on each fetch; re-download if changed
  - Automatic when the user updates a poster/backdrop
- TTL (Time-to-Live):
  - Optional: images older than 30 days can be re-validated
  - Useful for metadata that changes rarely
- Manual Refresh:
  - Settings UI: "Clear Image Cache" button
  - Developer option: force refresh all images
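The tag-comparison and TTL rules above combine into one pure decision, with a tag mismatch taking precedence over age. A testable sketch (type and function names are assumptions):

```rust
/// Why a cached image must be re-fetched, per the invalidation strategy.
#[derive(Debug, PartialEq, Eq)]
pub enum Invalidation {
    Fresh,
    TagChanged,
    Expired,
}

/// Tag mismatch wins over TTL: a changed server tag always re-downloads,
/// regardless of how recently the file was cached.
pub fn check_invalidation(
    cached_tag: &str,
    server_tag: Option<&str>,
    age_days: u32,
    max_age_days: u32,
) -> Invalidation {
    if let Some(tag) = server_tag {
        if tag != cached_tag {
            return Invalidation::TagChanged;
        }
    }
    if age_days > max_age_days {
        return Invalidation::Expired;
    }
    Invalidation::Fresh
}
```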
Configuration UI
Location: src/routes/settings/+page.svelte
<!-- Cache Settings Section -->
<div class="space-y-4">
<h3 class="text-lg font-semibold text-white">Cache Settings</h3>
<!-- Cache Size Limit -->
<div class="flex items-center justify-between">
<div>
<p class="text-white">Image Cache Size Limit</p>
<p class="text-sm text-gray-400">Maximum storage for cached images</p>
</div>
<select bind:value={cacheSettings.maxSizeMB} class="...">
<option value={100}>100 MB</option>
<option value={500}>500 MB</option>
<option value={1000}>1 GB</option>
<option value={2000}>2 GB</option>
</select>
</div>
<!-- Cache Stats -->
<div class="bg-gray-800 rounded-lg p-4">
<div class="flex items-center justify-between mb-2">
<span class="text-gray-400">Current Cache Size</span>
<span class="text-white">{cacheStats.totalSizeMB} MB</span>
</div>
<div class="flex items-center justify-between mb-2">
<span class="text-gray-400">Cached Images</span>
<span class="text-white">{cacheStats.totalImages}</span>
</div>
<div class="flex items-center justify-between">
<span class="text-gray-400">Cache Hit Rate</span>
<span class="text-white">{cacheStats.cacheHitRate}%</span>
</div>
</div>
<!-- Clear Cache Button -->
<button onclick={handleClearCache} class="...">
Clear Image Cache
</button>
</div>
Success Metrics
- Performance:
  - Video player title card appears in <100ms
  - Library grid renders in <500ms
  - Cache hit rate >80% for repeat navigation
- Storage:
  - Cache stays within configured limit
  - LRU eviction maintains most-used content
- User Experience:
  - No perceived loading delay for cached content
  - Smooth navigation between library views
  - Offline browsing works seamlessly
Future Enhancements
- Progressive Image Loading:
  - Show low-quality placeholder immediately
  - Replace with high-quality when available
- Smart Pre-caching:
  - Analyze navigation patterns
  - Pre-cache likely next views (e.g., continue watching)
- WebP Support:
  - Convert to WebP for 25-35% size reduction
  - Requires Jellyfin server support or client-side conversion
- CDN Integration:
  - Support for CDN-hosted images
  - Edge caching for improved performance
Related Requirements
- ✅ DR-012: Local database for media metadata cache (Done)
- 🔄 DR-016: Thumbnail caching and sync with server (In Progress)
- 🔄 DR-001: Player state machine - Loading state (Partially Done - UI implemented)
- 🔄 DR-010: Video player UI (Planned)
Questions for Discussion
- Image Format: Should we convert all images to WebP for smaller size?
- Cache Priority: Should video title cards get higher priority than library thumbnails?
- Background Sync: How aggressively should we pre-cache? (WiFi-only option?)
- Offline Mode: Should we pre-download all metadata for offline libraries?
Last Updated: 2026-01-04
Status: Design Complete - Ready for Implementation
Next Step: Begin Phase 1 implementation