Speed up Phoenix with ETS content caching and TTL expiration
almirsarajcic
Database queries for content-heavy pages become performance bottlenecks fast. Redis adds infrastructure complexity. ETS gives you microsecond lookups built into the BEAM.
defmodule MyApp.ContentCache do
  @cache_table :content_cache
  @ttl_ms 5 * 60 * 1000 # 5 minutes

  def get_or_generate(type, key_data, generator_fn) do
    cache_key = build_cache_key(type, key_data)
    ensure_cache_table()

    case lookup_cache(cache_key) do
      {:hit, content} ->
        content

      :miss ->
        content = generator_fn.()
        store_cache(cache_key, content)
        content
    end
  end

  defp lookup_cache(cache_key) do
    case :ets.lookup(@cache_table, cache_key) do
      [{^cache_key, content, timestamp}] ->
        if System.system_time(:millisecond) - timestamp < @ttl_ms do
          {:hit, content}
        else
          # Entry is stale: evict it and treat it as a miss.
          :ets.delete(@cache_table, cache_key)
          :miss
        end

      [] ->
        :miss
    end
  end

  defp store_cache(cache_key, content) do
    timestamp = System.system_time(:millisecond)
    :ets.insert(@cache_table, {cache_key, content, timestamp})
  end

  # Caveat: an ETS table is owned by the process that creates it and is
  # destroyed when that process exits. With this lazy creation, the first
  # caller (often a short-lived request process) becomes the owner, so in
  # production create the table from a long-lived supervised process
  # instead (e.g. in your application's start/2).
  defp ensure_cache_table do
    case :ets.info(@cache_table) do
      :undefined -> :ets.new(@cache_table, [:named_table, :public, :set])
      _info -> :ok
    end
  end

  # Add one clause per content type; this one handles articles.
  defp build_cache_key(:article, %{id: id, updated_at: updated_at}) do
    {:article, id, timestamp_key(updated_at)}
  end

  # Collapse a DateTime/NaiveDateTime to minute precision as a single
  # integer, e.g. ~N[2024-05-01 12:30:00] -> 202405011230.
  defp timestamp_key(%{year: y, month: m, day: d, hour: h, minute: min}) do
    y * 100_000_000 + m * 1_000_000 + d * 10_000 + h * 100 + min
  end

  def clear_by_type(type) do
    case :ets.info(@cache_table) do
      :undefined ->
        :ok

      _info ->
        # Match entries whose key has the shape {type, _, _} and delete them.
        :ets.select_delete(@cache_table, [
          {{{type, :_, :_}, :_, :_}, [], [true]}
        ])
    end
  end
end
Use it in your controller for instant performance wins:
def show(conn, %{"id" => id}) do
  article = Blog.get_article!(id)

  html_content =
    MyApp.ContentCache.get_or_generate(:article, article, fn ->
      ArticleFormatter.to_html(article)
    end)

  render(conn, "show.html", content: html_content)
end
The key insight: cache keys include updated_at timestamps, so content automatically invalidates when data changes. No manual cache busting needed.
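To see this invalidation in action, here is a standalone sketch of the same key-derivation logic, showing that editing a record (which bumps updated_at) yields a different cache key, so the next lookup misses and regenerates:

```elixir
# Standalone sketch of the key-derivation logic from MyApp.ContentCache,
# demonstrating that updating a record changes its cache key.
defmodule CacheKeyDemo do
  def build_cache_key(:article, %{id: id, updated_at: updated_at}) do
    {:article, id, timestamp_key(updated_at)}
  end

  defp timestamp_key(%{year: y, month: m, day: d, hour: h, minute: min}) do
    y * 100_000_000 + m * 1_000_000 + d * 10_000 + h * 100 + min
  end
end

before_edit = %{id: 42, updated_at: ~N[2024-05-01 12:30:00]}
after_edit = %{id: 42, updated_at: ~N[2024-05-01 12:31:00]}

key1 = CacheKeyDemo.build_cache_key(:article, before_edit)
key2 = CacheKeyDemo.build_cache_key(:article, after_edit)

IO.inspect(key1) # {:article, 42, 202405011230}
IO.inspect(key2) # {:article, 42, 202405011231}
```

The entry stored under the old key is never read again; it simply ages out via the TTL.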
Add cache clearing for writes:
def update_article(article, attrs) do
  changeset = Article.changeset(article, attrs)

  case Repo.update(changeset) do
    {:ok, updated_article} ->
      MyApp.ContentCache.clear_by_type(:article)
      {:ok, updated_article}

    error ->
      error
  end
end
An ETS table lives and dies with the process that owns it: create the table from a long-lived, supervised process and it will survive individual request-process crashes, though everything resets on application restart - fine for content that rebuilds quickly. Each node also holds its own table, so for cluster-wide invalidation, combine this with Phoenix PubSub to clear related caches on every node.
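A minimal sketch of that PubSub approach, assuming a Phoenix.PubSub instance named MyApp.PubSub (the default in generated Phoenix apps); the topic name "cache_invalidation" and the module name are illustrative:

```elixir
# Sketch: broadcast an invalidation message after a write, and run one
# subscriber process per node that clears that node's local ETS table.
# Assumes Phoenix.PubSub is running under the name MyApp.PubSub.
defmodule MyApp.CacheInvalidator do
  use GenServer

  @topic "cache_invalidation"

  def start_link(_opts), do: GenServer.start_link(__MODULE__, nil, name: __MODULE__)

  # Call this from write paths instead of clearing only the local table.
  def broadcast_clear(type) do
    Phoenix.PubSub.broadcast(MyApp.PubSub, @topic, {:clear_cache, type})
  end

  @impl true
  def init(nil) do
    Phoenix.PubSub.subscribe(MyApp.PubSub, @topic)
    {:ok, nil}
  end

  @impl true
  def handle_info({:clear_cache, type}, state) do
    # This runs on every node subscribed to the topic, so each node's
    # local :content_cache table gets cleared.
    MyApp.ContentCache.clear_by_type(type)
    {:noreply, state}
  end
end
```

Start it in your supervision tree after the PubSub child, and swap direct clear_by_type/1 calls in write paths for broadcast_clear/1.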
Cache Management:
Stale data isn’t a problem with this pattern - cache keys include updated_at timestamps, so when your data changes, the cache key changes too. Old entries expire via TTL.
For storage limits, add memory monitoring to MyApp.ContentCache:
def get_cache_stats do
  case :ets.info(@cache_table) do
    :undefined ->
      %{size: 0, memory_words: 0, memory_mb: 0}

    info ->
      %{
        size: info[:size],
        memory_words: info[:memory],
        memory_mb: info[:memory] * :erlang.system_info(:wordsize) / (1024 * 1024)
      }
  end
end

def cleanup_if_needed do
  stats = get_cache_stats()

  if stats.memory_mb > 100 do # 100MB limit
    clear_expired_entries()
  end
end

defp clear_expired_entries do
  now = System.system_time(:millisecond)

  # Delete every entry whose age (now - timestamp) exceeds the TTL.
  :ets.select_delete(@cache_table, [
    {{:_, :_, :"$1"}, [{:>, {:-, now, :"$1"}, @ttl_ms}], [true]}
  ])
end
Pro tip: For high-traffic apps, run cleanup in a GenServer every few minutes. ETS memory usage shows up in :observer.start() under the owning process.
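That periodic cleanup can be sketched as a small GenServer; the module name, interval, and the injected cleanup function are illustrative (in the app above you would pass &MyApp.ContentCache.cleanup_if_needed/0):

```elixir
# Sketch: a GenServer that invokes a cleanup function on a fixed interval.
# The function is passed in as an option so the process stays decoupled
# from the cache module.
defmodule CacheJanitor do
  use GenServer

  def start_link(opts) do
    GenServer.start_link(__MODULE__, opts)
  end

  @impl true
  def init(opts) do
    interval_ms = Keyword.get(opts, :interval_ms, :timer.minutes(5))
    cleanup_fun = Keyword.fetch!(opts, :cleanup_fun)
    schedule(interval_ms)
    {:ok, %{interval_ms: interval_ms, cleanup_fun: cleanup_fun}}
  end

  @impl true
  def handle_info(:cleanup, state) do
    state.cleanup_fun.()
    schedule(state.interval_ms)
    {:noreply, state}
  end

  defp schedule(interval_ms) do
    Process.send_after(self(), :cleanup, interval_ms)
  end
end
```

Add it to your supervision tree, e.g. {CacheJanitor, interval_ms: :timer.minutes(5), cleanup_fun: &MyApp.ContentCache.cleanup_if_needed/0}.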