Fix Kubernetes Pod Crash in C#

Diagnose and fix CrashLoopBackOff errors in Kubernetes pods running C# .NET applications with practical debugging steps.

Kubernetes Pod CrashLoopBackOff for C# Apps

When your C# pod enters CrashLoopBackOff, Kubernetes is repeatedly restarting it because the container keeps exiting shortly after starting, and each restart waits longer than the last. Let's figure out why.

Step 1: Check the Logs

kubectl logs <pod-name> --previous

The --previous flag shows output from the last crashed container rather than the current restart. If the logs are empty, run kubectl describe pod <pod-name> and check the Last State section for the exit code; exit code 137 usually means the container was OOMKilled.

Common C# crash reasons:

  • Unhandled exceptions during startup
  • Missing configuration or connection strings
  • Port binding conflicts
  • Insufficient memory (OOMKilled)

Step 2: Common Fixes

Missing appsettings.json in the container:

# Make sure config is copied
COPY appsettings.json .
COPY appsettings.Production.json .
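A related gotcha: appsettings.Production.json is only loaded when the environment is set to Production. A minimal sketch of setting it in the pod spec (the container name and image are illustrative):

```yaml
# deployment.yaml (container spec excerpt)
spec:
  containers:
    - name: csharp-api                    # illustrative name
      image: registry.example.com/csharp-api:latest
      env:
        - name: ASPNETCORE_ENVIRONMENT
          value: "Production"             # without this, appsettings.Production.json is ignored
```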

Connection string pointing to localhost instead of the Kubernetes service:

{
  "ConnectionStrings": {
    "Default": "Server=postgres-service;Port=5432;Database=mydb;User Id=admin;Password=secret;"
  }
}
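Rather than baking credentials into appsettings.json, you can override the connection string with an environment variable: .NET's configuration system maps the double-underscore in ConnectionStrings__Default to the ConnectionStrings:Default key. A sketch assuming a Secret named db-credentials already exists in the namespace:

```yaml
# deployment.yaml (container spec excerpt)
env:
  - name: ConnectionStrings__Default
    valueFrom:
      secretKeyRef:
        name: db-credentials      # assumed Secret name
        key: connection-string    # assumed key holding the full connection string
```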

Health check endpoint missing, causing liveness probe failures:

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddHealthChecks();

var app = builder.Build();
app.MapHealthChecks("/healthz");
app.Run();

And the matching probe in your deployment:

# deployment.yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 10
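If the app needs time to warm up, a liveness probe alone can kill it before it ever binds the port. One option (Kubernetes 1.18+) is a startupProbe, which holds off liveness checks until it succeeds; the thresholds below are illustrative, not prescriptive:

```yaml
# deployment.yaml
startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  failureThreshold: 30   # allows up to 30 * 10s = 300s for startup
  periodSeconds: 10
```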

Step 3: Memory Limits

ASP.NET Core apps use the server garbage collector by default, which trades memory for throughput, so significant memory use is normal. If you see OOMKilled in kubectl describe pod, increase your limits:

resources:
  limits:
    memory: "512Mi"
  requests:
    memory: "256Mi"

Bugsly captures unhandled .NET exceptions before the process exits, giving you the full exception chain even when kubectl logs has already rotated.

Try Bugsly Free

AI-powered error tracking that explains your bugs. Set up in 2 minutes, free forever for small projects.

Get Started Free