A cheatsheet covering CI/CD pipelines, Terraform, cybersecurity fundamentals, network security, cloud security, DevOps tools, API security, and security best practices.
| Concept | Definition | Goal |
|---|---|---|
| CI (Continuous Integration) | Developers merge code into a shared repo frequently; automated builds and tests run on every commit | Catch bugs early, ensure code quality |
| CD (Continuous Delivery) | Code changes are automatically prepared for release to production; a human approves the final deploy | Release-ready artifacts at any time |
| CD (Continuous Deployment) | Every change that passes all stages is released to production automatically, with zero human intervention | Fully automated release pipeline |
| Tool | Type | Hosted / Self-hosted | Best For |
|---|---|---|---|
| GitHub Actions | CI/CD + Automation | Both | GitHub-native workflows, huge marketplace |
| GitLab CI | CI/CD + Security | Both | End-to-end DevOps platform, Auto DevOps |
| Jenkins | CI/CD | Self-hosted | Highly customizable, large plugin ecosystem |
| CircleCI | CI/CD | Both | Fast parallelism, Docker-first |
| ArgoCD | GitOps CD | Self-hosted | Kubernetes-native, declarative deployments |
# ── GitHub Actions CI/CD Pipeline ──
name: CI/CD Pipeline
on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]
env:
  NODE_VERSION: "20"
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [18, 20, 22]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
          cache: npm
      - run: npm ci
      - run: npm run lint
      - run: npm run test -- --coverage
      - uses: actions/upload-artifact@v4
        with:
          name: coverage-report-${{ matrix.node-version }} # v4 requires unique names per matrix job
          path: coverage/
  build-and-push:
    needs: test
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: |
            ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest
            ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
| Stage | Description | Tools |
|---|---|---|
| Source | Code checkout and trigger | Git webhooks, branch rules |
| Build | Compile code, resolve dependencies | npm, Maven, Gradle, pip, Docker build |
| Test | Unit, integration, E2E tests | Jest, PyTest, Cypress, Playwright |
| Security Scan | SAST, DAST, dependency audit | SonarQube, Snyk, Trivy, OWASP ZAP |
| Artifact | Package and store build output | Docker images, JARs, npm packages |
| Deploy Staging | Deploy to staging environment | Kubectl, Helm, ArgoCD, Terraform |
| Integration Test | End-to-end tests on staging | Postman, k6, Playwright |
| Deploy Prod | Release to production | Blue-green, Canary, Rolling |
| Monitor | Observe production health | Prometheus, Grafana, Datadog |
| Strategy | How It Works | Rollback | Downtime | Best For |
|---|---|---|---|---|
| Rolling | Gradually replace old pods with new ones | Slow (reverse rolling) | None | Standard Kubernetes deployments |
| Blue-Green | Maintain two identical environments; switch traffic atomically | Instant (switch back) | None | Critical apps, low risk tolerance |
| Canary | Route small % of traffic to new version; monitor; increase gradually | Gradual traffic shift | None | High-traffic apps, A/B testing |
| Feature Flags | Decouple deploy from release using toggle flags in code | Toggle off feature | None | Continuous delivery, gradual feature rollout |
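The feature-flag and canary rows above come down to deterministic bucketing: hash each user into a stable bucket and enable the new path for a growing percentage. A minimal sketch (the `isEnabled` helper and flag names are illustrative, not from any specific feature-flag library):

```javascript
import { createHash } from 'node:crypto';

// Deterministic percentage rollout: hash "flag:user" into one of 100
// buckets and enable the flag only for buckets below the rollout %.
// The same user always lands in the same bucket, so a gradual rollout
// stays stable between requests.
function isEnabled(flagName, userId, rolloutPercent) {
  const digest = createHash('sha256')
    .update(`${flagName}:${userId}`)
    .digest();
  const bucket = digest.readUInt32BE(0) % 100; // 0..99
  return bucket < rolloutPercent;
}
```

Raising `rolloutPercent` from 5 to 50 to 100 over time turns a deploy into a gradual release; toggling it to 0 is the instant rollback.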
# ── GitLab CI Pipeline ──
stages:
  - test
  - build
  - deploy-staging
  - deploy-prod
variables:
  DOCKER_TLS_CERTDIR: "/certs"
test:
  stage: test
  image: node:20-alpine
  script:
    - npm ci
    - npm run lint
    - npm run test:ci
  coverage: '/Lines\s*:\s*(\d+\.?\d*)%/'
build:
  stage: build
  image: docker:24
  services:
    - docker:24-dind # docker-in-docker service
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
  only: [main, develop]
deploy-staging:
  stage: deploy-staging
  script:
    - helm upgrade --install myapp ./helm --set image.tag=$CI_COMMIT_SHA
  environment: staging
  only: [develop]
deploy-prod:
  stage: deploy-prod
  script:
    - helm upgrade --install myapp ./helm --set image.tag=$CI_COMMIT_SHA
  environment: production
  when: manual
  only: [main]
Use cache-from: type=gha with GitHub Actions to avoid rebuilding unchanged layers. Tag images with both latest and the Git SHA for traceability.
# ── HCL Syntax Basics ──
# Blocks, arguments, expressions
resource "aws_instance" "web" {
  ami           = var.ami_id
  instance_type = var.instance_type
  tags = {
    Name        = "web-server"
    Environment = var.environment
    ManagedBy   = "terraform"
  }
}
# ── Variables ──
variable "ami_id" {
  description = "AMI ID for the EC2 instance"
  type        = string
  default     = "ami-0c55b159cbfafe1f0"
}
variable "instance_type" {
  type    = string
  default = "t3.micro"
}
variable "environment" {
  type    = string
  default = "dev"
}
variable "allowed_ports" {
  description = "List of allowed ingress ports"
  type        = list(number)
  default     = [80, 443]
}
# ── Outputs ──
output "instance_public_ip" {
  description = "Public IP of the EC2 instance"
  value       = aws_instance.web.public_ip
}
output "security_group_id" {
  value = aws_security_group.web.id
}
# ── Data Sources ──
data "aws_availability_zones" "available" {
  state = "available"
}
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical
  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-*-amd64-server-*"]
  }
  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}
# ── VPC with subnets ──
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true
  tags = { Name = "main-vpc" }
}
resource "aws_subnet" "public" {
  count                   = length(data.aws_availability_zones.available.names)
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.${count.index}.0/24"
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true
  tags = { Name = "public-subnet-${count.index}" }
}
# ── Security Group ──
resource "aws_security_group" "web" {
  name        = "web-sg"
  description = "Allow HTTP and HTTPS"
  vpc_id      = aws_vpc.main.id
  dynamic "ingress" {
    for_each = var.allowed_ports
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
# ── Reusable Module ──
variable "instance_type" { type = string }
variable "ami_id" { type = string }
variable "subnet_id" { type = string }
variable "sg_ids" { type = list(string) }
resource "aws_instance" "server" {
  ami                    = var.ami_id
  instance_type          = var.instance_type
  subnet_id              = var.subnet_id
  vpc_security_group_ids = var.sg_ids
  tags = { Name = "web-server" }
}
output "public_ip" { value = aws_instance.server.public_ip }
# ── Essential Terraform Commands ──
terraform init # Initialize (download providers, modules)
terraform plan -out=tfplan # Preview changes, save plan (don't name it *.tf, or Terraform parses it as config)
terraform apply tfplan # Apply saved plan
terraform apply -auto-approve # Apply without confirmation (CI/CD)
terraform destroy # Remove all resources
terraform fmt -recursive # Format all .tf files
terraform validate # Syntax and logical validation
tflint # Static analysis (separate tool, not a terraform subcommand)
terraform graph # Visualize dependency graph (dot format)
terraform workspace new prod # Create workspace
terraform workspace select prod
terraform output -json # Get outputs as JSON
# ── Remote State Backend (AWS S3) ──
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "infra/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-locks"
  }
}
# ── Provider Constraints ──
terraform {
  required_version = ">= 1.7.0"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    google = {
      source  = "hashicorp/google"
      version = "~> 5.0"
    }
  }
}
Use sensitive = true on variables and outputs that contain secrets.
| Principle | Description | Example |
|---|---|---|
| Confidentiality | Ensure data is accessible only to authorized parties | Encryption, access controls, classification |
| Integrity | Protect data from unauthorized modification or deletion | Hashing, digital signatures, checksums |
| Availability | Ensure systems and data are accessible when needed | Redundancy, backups, DDoS protection |
| Threat | Type | Description | Mitigation |
|---|---|---|---|
| Malware | Software | Malicious software (viruses, worms, trojans, spyware) | Antivirus, EDR, user training |
| Phishing | Social Eng. | Fraudulent messages to steal credentials or data | Email filters, MFA, security awareness |
| Ransomware | Malware | Encrypts files and demands payment for decryption | Backups, EDR, network segmentation |
| DDoS | Network | Overwhelms services with massive traffic | CDN, WAF, rate limiting, Cloudflare |
| SQL Injection | Injection | Malicious SQL queries through input fields | Parameterized queries, ORM, input validation |
| XSS | Injection | Inject client-side scripts into web pages | Output encoding, CSP headers, sanitization |
| CSRF | Web | Forces users to execute unwanted actions | CSRF tokens, SameSite cookies |
| MITM | Network | Intercepts communication between two parties | HTTPS/TLS, certificate pinning, VPN |
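For the XSS row, "output encoding" means escaping untrusted text before it is interpolated into HTML. A minimal sketch (the `escapeHtml` helper is illustrative; in practice prefer a templating engine that escapes by default, plus a CSP header):

```javascript
// Output-encode untrusted text before placing it in HTML. Injected
// markup like <script> becomes inert text instead of executing.
function escapeHtml(untrusted) {
  const map = { '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;' };
  return String(untrusted).replace(/[&<>"']/g, (ch) => map[ch]);
}
```

Escaping must happen at output time, per context (HTML body, attribute, URL); storing "pre-escaped" input in the database is a common anti-pattern.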
| # | Risk | Description |
|---|---|---|
| 01 | Broken Access Control | Users can act outside intended permissions |
| 02 | Cryptographic Failures | Weak or missing encryption of sensitive data |
| 03 | Injection | Untrusted data sent to interpreter (SQL, NoSQL, OS, LDAP) |
| 04 | Insecure Design | Missing or ineffective security controls in design |
| 05 | Security Misconfiguration | Default configs, open cloud storage, misconfigured headers |
| 06 | Vulnerable Components | Using libraries/frameworks with known vulnerabilities |
| 07 | Auth Failures | Weak passwords, session management, credential stuffing |
| 08 | Software/Data Integrity | Insecure CI/CD pipelines, auto-updates without verification |
| 09 | Logging/Monitoring Failures | Insufficient logging, detection, and response |
| 10 | Server-Side Request Forgery | Server fetches attacker-controlled URLs (SSRF) |
| Type | Keys | Speed | Use Case | Algorithms |
|---|---|---|---|---|
| Symmetric | One shared key | Fast | Bulk data encryption, disk encryption | AES-256, ChaCha20 |
| Asymmetric | Public + Private key pair | Slow | Key exchange, digital signatures, TLS | RSA-4096, Ed25519, ECDSA |
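A sketch of symmetric authenticated encryption with AES-256-GCM using Node's built-in crypto module (the `encrypt`/`decrypt` helpers are illustrative wrappers):

```javascript
import { randomBytes, createCipheriv, createDecipheriv } from 'node:crypto';

// AES-256-GCM: fast symmetric encryption with built-in integrity.
// The auth tag lets the receiver detect tampering; the 12-byte IV
// must be unique for every message encrypted under the same key.
function encrypt(key, plaintext) {
  const iv = randomBytes(12);
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function decrypt(key, { iv, ciphertext, tag }) {
  const decipher = createDecipheriv('aes-256-gcm', key, iv);
  decipher.setAuthTag(tag); // throws on decrypt if data was tampered with
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8');
}

const key = randomBytes(32); // 256-bit key
```

In real systems the key comes from a KMS or key-derivation function, never from source code; asymmetric crypto (RSA/ECDH) is typically used only to exchange such a symmetric key.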
| Algorithm | Output Size | Secure | Use Case |
|---|---|---|---|
| MD5 | 128-bit | No | Legacy checksums, file dedup only |
| SHA-1 | 160-bit | No | Deprecated; collision attacks exist |
| SHA-256 | 256-bit | Yes | General purpose hashing, certificates |
| SHA-512 | 512-bit | Yes | Higher security margin, FIPS approved |
| bcrypt | 192-bit | Yes | Password hashing (built-in salt, cost factor) |
| Argon2id | Variable | Yes | Winner of PHC; best for password hashing |
| HMAC-SHA256 | 256-bit | Yes | Message authentication codes (MACs) |
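SHA-256 and HMAC-SHA256 from the table, with Node's crypto module; note the constant-time comparison when verifying a MAC:

```javascript
import { createHash, createHmac, timingSafeEqual } from 'node:crypto';

// Plain hash: integrity check for public data (anyone can recompute it).
const digest = createHash('sha256').update('hello').digest('hex');
// → 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824

// HMAC: hash keyed with a shared secret, so only key holders can
// produce or verify it (message authentication).
const mac = createHmac('sha256', 'shared-secret').update('hello').digest();

// Compare MACs in constant time to avoid timing side channels.
function verifyMac(expected, actual) {
  return expected.length === actual.length && timingSafeEqual(expected, actual);
}
```

Per the table, none of these are suitable for passwords; use bcrypt or Argon2id there, since fast hashes make brute force cheap.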
# ── HTTPS/TLS Basics ──
# TLS 1.3 is the current standard (TLS 1.0/1.1 deprecated)
# TLS Handshake (simplified):
# 1. Client Hello → supported ciphers, TLS version
# 2. Server Hello → chosen cipher, certificate
# 3. Key Exchange → ECDHE (ephemeral Diffie-Hellman)
# 4. Change Cipher → encrypted communication begins
# Check TLS certificate
openssl s_client -connect example.com:443 -servername example.com
# Generate self-signed cert (dev only)
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem \
-days 365 -nodes -subj "/CN=localhost"
# Check certificate expiry
echo | openssl s_client -connect example.com:443 2>/dev/null | \
openssl x509 -noout -dates
# TLS 1.3 cipher suites (modern):
# TLS_AES_256_GCM_SHA384
# TLS_CHACHA20_POLY1305_SHA256
# TLS_AES_128_GCM_SHA256
| Type | Layer | Scope | Examples |
|---|---|---|---|
| Network Firewall | L3/L4 | Perimeter defense, packet filtering | iptables, nftables, pf, AWS NACL |
| Web Application Firewall | L7 | HTTP/HTTPS traffic inspection | AWS WAF, Cloudflare WAF, ModSecurity |
| Host Firewall | L3/L4 | Individual server protection | UFW, firewalld, Windows Firewall |
| Next-Gen Firewall | L3-L7 | Deep inspection, IDS/IPS, app awareness | Palo Alto, Fortinet, Sophos |
| Type | Protocol | Use Case | Key Feature |
|---|---|---|---|
| Remote Access VPN | OpenVPN, WireGuard | Individual users connecting to corporate network | Client software required |
| Site-to-Site VPN | IPsec, WireGuard | Connecting two office networks | Routers negotiate tunnel |
| SSL VPN | TLS (HTTPS) | Browser-based access, no client install | Portal-based, portable |
| Zero Trust VPN | WireGuard + identity | Access based on identity, not network | BeyondCorp model |
| System | Full Name | Action | Placement |
|---|---|---|---|
| IDS | Intrusion Detection System | Detect only (alerts); passive | Span port, tap, mirrored traffic |
| IPS | Intrusion Prevention System | Detect + block; active | Inline ("bump in the wire") |
| WAF | Web Application Firewall | Filter HTTP/HTTPS traffic | Reverse proxy before app server |
# ── Port Scanning with Nmap ──
# Install: sudo apt install nmap
# Quick scan (top 1000 ports)
nmap example.com
# Scan specific ports
nmap -p 22,80,443,8080 example.com
# Scan all 65535 ports
nmap -p- example.com
# Service version detection
nmap -sV example.com
# OS detection
nmap -O example.com
# SYN scan (stealth, requires root)
sudo nmap -sS example.com
# UDP scan
sudo nmap -sU --top-ports 100 example.com
# Aggressive scan (OS + version + scripts + traceroute)
sudo nmap -A example.com
# Scan a subnet
nmap 192.168.1.0/24
# Nmap scripting engine (NSE)
nmap --script vuln example.com # vulnerability scan
nmap --script http-headers example.com # HTTP headers
nmap --script ssl-enum-ciphers -p 443 example.com # TLS ciphers
| # | Principle | Description |
|---|---|---|
| 1 | Never Trust, Always Verify | Authenticate and authorize every request regardless of origin |
| 2 | Least Privilege Access | Grant minimum permissions required for each user/service |
| 3 | Assume Breach | Design as if the network is already compromised |
| 4 | Micro-segmentation | Isolate workloads and limit lateral movement |
| 5 | Continuous Monitoring | Real-time analytics and anomaly detection on all traffic |
| 6 | Device Trust | Verify device health and compliance before granting access |
| Layer | Cloud Provider | Customer |
|---|---|---|
| Physical Infrastructure | Data centers, hardware, networking | — |
| Platform | Hypervisor, host OS, managed services | — |
| Operating System | — | Guest OS patching, configuration |
| Network | VPC, firewall rules | Security groups, NACLs, encryption |
| Data | Encryption at rest options | Data classification, key management |
| Identity | IAM service, MFA option | Users, roles, policies, access review |
| Application | SDKs, APIs | Secure coding, input validation, testing |
| Compliance | Compliance certifications (SOC2, ISO) | Workload compliance, audit trails |
// ── AWS IAM Best Practices ──
// 1. Use IAM roles instead of long-lived access keys
// 2. Apply least privilege policies
// 3. Enable MFA for all human users
// 4. Use IAM Identity Center (SSO) for federated access
// 5. Rotate credentials regularly
// Example: Least privilege S3 policy
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Resource": "arn:aws:s3:::my-bucket/uploads/${aws:username}/*"
}
]
}
// S3 Bucket Policy - Block public access
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "BlockPublicAccess",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::my-private-bucket/*",
"Condition": {
"Bool": {
"aws:SecureTransport": "false"
}
}
}
]
}
| Framework | Scope | Key Requirements |
|---|---|---|
| SOC 2 Type II | Service organizations | Security, availability, processing integrity, confidentiality, privacy |
| ISO 27001 | Any organization | ISMS: risk assessment, security controls, continuous improvement |
| GDPR | EU personal data | Data consent, right to erasure, breach notification, DPO |
| HIPAA | Healthcare (US) | PHI protection, access controls, audit trails, encryption |
| PCI DSS | Payment card data | Network segmentation, encryption, access control, pen testing |
| FedRAMP | US federal agencies | Security authorization for cloud service providers |
# ── Cloud Security Audit Checklist ──
# AWS Quick Checks
aws iam list-users --query 'Users[?PasswordLastUsed!=null]'
aws s3api list-buckets --query 'Buckets[].Name'
aws ec2 describe-security-groups --query 'SecurityGroups[*].IpPermissions'
# Check for public S3 buckets
aws s3api list-buckets --query 'Buckets[*].Name' --output text | tr '\t' '\n' | \
xargs -I {} aws s3api get-bucket-policy-status --bucket {}
# AWS CLI security hardening
aws iam get-account-summary
aws guardduty list-detectors
aws securityhub get-enabled-standards
# GCP Quick Checks
gcloud iam service-accounts list
gcloud compute firewalls list --filter="direction:INGRESS"
gcloud sql instances list
# Azure Quick Checks
az ad user list
az network nsg list
az storage account list
| Feature | Kubernetes | Docker Swarm | Nomad |
|---|---|---|---|
| Scale | Large (1000s of nodes) | Small-Medium | Medium-Large |
| Learning Curve | Steep | Low | Moderate |
| Self-Healing | Yes (Deployments, DaemonSets) | Limited | Yes (task restart) |
| Service Discovery | CoreDNS built-in | Docker DNS overlay | Consul integration |
| Load Balancing | Ingress, Services, MetalLB | VIP + DNSRR | Consul Connect |
| Storage | PV/PVC, CSI drivers | Volumes | CSI, host, Docker volumes |
| Ecosystem | Massive (Helm, Istio, Argo) | Small | Growing (Consul ecosystem) |
| Best For | Complex, enterprise workloads | Simple apps, small teams | Mixed workloads, simplicity |
| Feature | Istio | Linkerd |
|---|---|---|
| Architecture | Sidecar (Envoy) + Control Plane | Sidecar (linkerd2-proxy) + Control Plane |
| Complexity | High (many CRDs, rich features) | Low (lightweight, focused) |
| mTLS | Yes (auto-rotate certificates) | Yes (auto-rotate certificates) |
| Traffic Mgmt | Advanced (retries, fault injection, mirroring) | Basic (retries, timeouts) |
| Observability | Kiali, Jaeger, Prometheus, Grafana | Prometheus, Grafana, Jaeger (via tap) |
| Resource Overhead | Higher (~100MB per sidecar) | Lower (~25MB per sidecar) |
| Tool | Category | Purpose | Protocol |
|---|---|---|---|
| Prometheus | Metrics | Time-series metrics collection and alerting | Pull (HTTP), PromQL |
| Grafana | Visualization | Dashboards for metrics, logs, traces | Prometheus, Loki, Tempo, Elasticsearch |
| ELK Stack | Logs | Log aggregation (Elasticsearch, Logstash, Kibana) | Beats, Logstash, API |
| Loki | Logs | Lightweight log aggregation (label-indexed) | Push (Promtail, Grafana Agent) |
| Jaeger | Traces | Distributed tracing for microservices | OpenTelemetry, Jaeger UI |
| Tempo | Traces | Scalable trace storage (Grafana native) | OpenTelemetry, OTLP |
| Alertmanager | Alerting | Deduplication, grouping, routing of alerts | Webhook, Slack, PagerDuty |
# ── Prometheus Alerting Rules ──
groups:
  - name: node-alerts
    rules:
      - alert: HighCPUUsage
        expr: 100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 85
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High CPU on {{ $labels.instance }}"
          description: "CPU usage is {{ $value }}% for 5 minutes"
      - alert: HighMemoryUsage
        expr: (1 - (node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes)) * 100 > 90
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "High memory on {{ $labels.instance }}"
      - alert: PodCrashLooping
        expr: rate(kube_pod_container_status_restarts_total[15m]) * 60 * 5 > 0
        for: 15m
        labels:
          severity: critical
        annotations:
          summary: "Pod {{ $labels.namespace }}/{{ $labels.pod }} is crash looping"
| Concept | Question | Methods | Example |
|---|---|---|---|
| Authentication | Who are you? | Password, MFA, OAuth, API keys, certificates | Login with username/password |
| Authorization | What can you do? | RBAC, ABAC, scopes, policies | User can edit but not delete |
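The authorization row (RBAC) can be sketched as a role-to-permission map checked on every request; the roles and actions here are illustrative, not a standard API:

```javascript
// Role → permission mapping: authorization answers "what can you do?",
// separately from authentication ("who are you?").
const rolePermissions = {
  user:   ['read'],
  editor: ['read', 'edit'],
  admin:  ['read', 'edit', 'delete'],
};

// Deny-by-default: unknown roles and unknown actions get no access.
function can(role, action) {
  return (rolePermissions[role] ?? []).includes(action);
}
```

The key property is deny-by-default: an unrecognized role or action yields `false`, matching the "user can edit but not delete" example in the table.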
// ── JWT Structure & Best Practices ──
// JWT = Header.Payload.Signature (three Base64URL parts separated by dots)
// HEADER (algorithm + token type)
// {"alg": "RS256", "typ": "JWT"}
// PAYLOAD (claims - never put secrets here!)
// {
// "sub": "user-123", // Subject (who)
// "iss": "myapp.com", // Issuer
// "aud": "myapp-api", // Audience
// "exp": 1700000000, // Expiration
// "iat": 1699996400, // Issued at
// "role": "admin", // Custom claim
// "jti": "unique-id" // JWT ID (for revocation)
// }
// Best practices:
// - Use RS256 (asymmetric) or ES256 for signing
// - Set short expiration (15 min for access tokens)
// - Use refresh tokens with rotation
// - Validate all claims: sub, iss, aud, exp
// - Store tokens in httpOnly, Secure, SameSite cookies
// - Never store tokens in localStorage
// - Include jti for revocation tracking
# ── OAuth 2.0 Authorization Code Flow ──
# Best for: Server-side apps with backend
# Step 1: Redirect user to authorization endpoint
GET /authorize?
response_type=code&
client_id=YOUR_CLIENT_ID&
redirect_uri=https://yourapp.com/callback&
scope=openid profile email&
state=RANDOM_CSRF_TOKEN
# Step 2: User authenticates and authorizes
# Provider redirects back with authorization code
# https://yourapp.com/callback?code=AUTH_CODE&state=RANDOM_CSRF_TOKEN
# Step 3: Exchange code for tokens (server-to-server)
POST /oauth/token
grant_type=authorization_code&
code=AUTH_CODE&
client_id=YOUR_CLIENT_ID&
client_secret=YOUR_CLIENT_SECRET&
redirect_uri=https://yourapp.com/callback
# Step 4: Provider returns access + refresh tokens
# {"access_token": "...", "refresh_token": "...", "expires_in": 3600}
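For public clients (SPAs, mobile apps) that cannot keep a client_secret, PKCE replaces it: the client sends only the SHA-256 hash of a random verifier in step 1, then proves possession by sending the plain verifier in step 3, so an intercepted authorization code is useless on its own. A sketch with Node's crypto module:

```javascript
import { createHash, randomBytes } from 'node:crypto';

// code_verifier: high-entropy random string kept by the client.
const codeVerifier = randomBytes(32).toString('base64url');

// code_challenge: its SHA-256 hash, sent in the authorization request.
const codeChallenge = createHash('sha256').update(codeVerifier).digest('base64url');

// Step 1 adds: &code_challenge=<codeChallenge>&code_challenge_method=S256
// Step 3 adds: &code_verifier=<codeVerifier>   (instead of client_secret)
```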
# OAuth 2.0 Grants:
# - Authorization Code: Server-side apps (most secure)
# - PKCE: Public clients (SPA, mobile) - adds code_challenge
# - Client Credentials: Machine-to-machine (no user)
# - Device Code: CLI tools, IoT devices
| # | Risk | Description |
|---|---|---|
| 01 | BOLA (IDOR) | Broken Object Level Authorization - access other users' resources |
| 02 | Broken Authentication | Authentication weaknesses in API endpoints |
| 03 | Broken Object Property Level Auth | Excessive data exposure and mass assignment |
| 04 | Unrestricted Resource Consumption | No rate limiting enables brute force and DoS |
| 05 | BFLA | Broken Function Level Authorization (admin endpoints) |
| 06 | Unrestricted Access to Business Flows | Automated abuse of legitimate workflows (scraping, scalping) |
| 07 | SSRF | Server-side requests to attacker-controlled URLs |
| 08 | Security Misconfiguration | Missing hardening, verbose errors, permissive CORS |
| 09 | Improper Inventory Management | Undocumented, outdated, or shadow API versions |
| 10 | Unsafe Consumption of APIs | Trusting third-party API responses without validation |
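BOLA (#1) is fixed by an object-level ownership check on every access, not just an "is the user logged in?" check. An illustrative sketch (the `authorizeObjectAccess` helper and its admin bypass are assumptions, not a standard API):

```javascript
// Object-level authorization: after authenticating the user, verify
// they may touch THIS object. Relying on unguessable IDs is not enough.
function authorizeObjectAccess(user, resource) {
  if (resource.ownerId !== user.id && user.role !== 'admin') {
    const err = new Error('Forbidden');
    err.status = 403;
    throw err;
  }
  return resource;
}
```

Run this check in one shared place (middleware or a data-access layer) so a forgotten per-route check cannot reintroduce the hole.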
// ── Rate Limiting Example (Express.js) ──
import rateLimit from 'express-rate-limit';
const apiLimiter = rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100, // 100 requests per window
standardHeaders: true, // Return rate limit info in headers
legacyHeaders: false,
message: {
error: 'Too many requests',
retryAfter: '900' // seconds
}
});
app.use('/api/', apiLimiter);
// ── CORS Configuration ──
import cors from 'cors';
// Never use cors() with no options in production
app.use(cors({
origin: ['https://myapp.com', 'https://admin.myapp.com'],
methods: ['GET', 'POST', 'PUT', 'DELETE'],
allowedHeaders: ['Content-Type', 'Authorization'],
credentials: true,
maxAge: 86400 // preflight cache: 24 hours
}));
// ── Input Validation with Zod ──
import { z } from 'zod';
const createUserSchema = z.object({
email: z.string().email().max(255),
name: z.string().min(1).max(100).regex(/^[a-zA-Z ]+$/),
role: z.enum(['user', 'editor', 'admin']),
}).strict(); // reject extra fields (prevent mass assignment)
app.post('/api/users', (req, res) => {
const result = createUserSchema.safeParse(req.body);
if (!result.success) {
return res.status(400).json({ errors: result.error.issues });
}
// result.data is validated and typed
});
Never send Access-Control-Allow-Origin: * when credentials are enabled. Always whitelist specific origins. The browser will reject responses that pair credentials: include with a wildcard origin.
| Requirement | NIST SP 800-63B Guideline | Traditional |
|---|---|---|
| Minimum Length | 8 characters (15+ for important accounts) | 8+ characters |
| Complexity | Check against breached passwords (HaveIBeenPwned) | Upper + lower + digit + special |
| Expiration | No forced expiration unless compromised | Every 90 days |
| Storage | bcrypt or Argon2id with cost factor 12+ | MD5/SHA-1 (insecure!) |
| MFA | Required for sensitive operations | Optional or not enforced |
| Lockout | Rate limit attempts, not lockout | Lock after 5 attempts |
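The breached-password check in the table can use HaveIBeenPwned's range API without ever sending the password: only the first 5 hex characters of its SHA-1 hash leave your server (k-anonymity), and the match is done locally. A sketch with Node's crypto module (`hibpRange` is an illustrative helper; the network call is omitted):

```javascript
import { createHash } from 'node:crypto';

// k-anonymity lookup: SHA-1 the password, send only the 5-char prefix.
// The API returns every suffix in that range; check for yours locally.
function hibpRange(password) {
  const sha1 = createHash('sha1').update(password).digest('hex').toUpperCase();
  return { prefix: sha1.slice(0, 5), suffix: sha1.slice(5) };
}
// GET https://api.pwnedpasswords.com/range/<prefix>, then search the
// response body for <suffix> to see if the password has been breached.
```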
| Type | Security Level | Example | Phishing Resistant |
|---|---|---|---|
| SMS OTP | Low | SMS code to phone | No (SIM swapping) |
| TOTP | Medium | Google Authenticator, Authy | No (real-time phishing) |
| Push Notification | Medium | Microsoft Authenticator, Duo | No (MFA fatigue attacks) |
| Hardware Key (FIDO2) | High | YubiKey, Titan Key | Yes (public key crypto) |
| Platform Authenticator | High | Touch ID, Windows Hello, Face ID | Yes (Passkeys) |
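The TOTP row works per RFC 6238: a 6-digit code derived from HMAC-SHA1 over a shared secret and a 30-second time counter, so server and authenticator app compute the same code independently. A compact sketch with Node's crypto module (use a vetted library in production):

```javascript
import { createHmac } from 'node:crypto';

// HOTP (RFC 4226): HMAC-SHA1 over an 8-byte counter, dynamically
// truncated to a 6-digit decimal code.
function hotp(secret, counter, digits = 6) {
  const msg = Buffer.alloc(8);
  msg.writeBigUInt64BE(BigInt(counter));
  const h = createHmac('sha1', secret).update(msg).digest();
  const offset = h[h.length - 1] & 0x0f; // dynamic truncation
  const bin =
    ((h[offset] & 0x7f) << 24) |
    (h[offset + 1] << 16) |
    (h[offset + 2] << 8) |
    h[offset + 3];
  return String(bin % 10 ** digits).padStart(digits, '0');
}

// TOTP (RFC 6238): HOTP with counter = floor(unixSeconds / 30).
const totp = (secret, t = Date.now() / 1000) => hotp(secret, Math.floor(t / 30));
```

This also shows why TOTP is not phishing resistant: the code is just a number, so a real-time phishing proxy can relay it; FIDO2 keys bind the response to the site's origin instead.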
// ── Secure Coding Practices ──
// 1. NEVER trust user input - always validate and sanitize
import { z } from 'zod';
const emailSchema = z.string().email().max(255).trim();
// 2. Parameterized queries (prevent SQL injection)
// BAD: db.query("SELECT * FROM users WHERE id = " + userId)
// GOOD:
import { Pool } from 'pg';
const pool = new Pool();
const result = await pool.query(
'SELECT * FROM users WHERE id = $1 AND active = $2',
[userId, true]
);
// 3. Use bcrypt for password hashing
import bcrypt from 'bcryptjs';
const saltRounds = 12;
const hash = await bcrypt.hash(password, saltRounds);
const match = await bcrypt.compare(inputPassword, hash);
// 4. Generate secure random tokens
import crypto from 'crypto';
const resetToken = crypto.randomBytes(32).toString('hex');
const csrfToken = crypto.randomBytes(32).toString('base64url');
// 5. Set security headers (helmet.js)
import helmet from 'helmet';
app.use(helmet()); // sets 15+ security headers automatically
// Including: Content-Security-Policy, X-Frame-Options,
// X-Content-Type-Options, Strict-Transport-Security, etc.
// 6. Prevent NoSQL injection (Mongoose schema validation)
import mongoose from 'mongoose';
const userSchema = new mongoose.Schema({
email: { type: String, required: true, unique: true },
role: { type: String, enum: ['user', 'admin'], default: 'user' },
});
// Mongoose casts types, preventing operator injection
| Tool | Provider | Key Features |
|---|---|---|
| HashiCorp Vault | Self-hosted / HCP | Dynamic secrets, PKI, encryption-as-service, transit engine |
| AWS Secrets Manager | AWS | Automatic rotation, RDS integration, Cross-account access |
| AWS SSM Parameter Store | AWS | Hierarchical params, KMS encryption, free tier available |
| GCP Secret Manager | GCP | Automatic replication, versioning, IAM integration |
| Azure Key Vault | Azure | Keys, secrets, certificates; HSM-backed option |
| Doppler | Multi-cloud | Universal env management, audit logs, sync to infra |
| Infisical | Open source | Self-hosted option, E2E encryption, native integrations |
# ── HashiCorp Vault Quick Start ──
# Start dev server (NEVER use in production)
vault server -dev &
# Enable key-value secrets engine
vault secrets enable -path=secret kv-v2
# Write a secret
vault kv put secret/myapp/database username=admin password=supersecret123
# Read a secret
vault kv get secret/myapp/database
# Enable AppRole authentication (for machines/services)
vault auth enable approle
# Create a role with policy
vault policy write myapp-policy - <<EOF
path "secret/data/myapp/*" {
capabilities = ["read"]
}
EOF
vault write auth/approle/role/myapp token_policies="myapp-policy" token_ttl=1h token_max_ttl=4h
# Get role_id and secret_id
vault read auth/approle/role/myapp/role-id
vault write -f auth/approle/role/myapp/secret-id
# ── Dependency Scanning & SAST ──
# npm audit (Node.js)
npm audit # check for vulnerabilities
npm audit fix # auto-fix vulnerabilities
npm audit --production # check production deps only
# Snyk (multi-language)
snyk test # scan current project
snyk monitor # continuous monitoring
snyk container test myapp:latest # scan Docker image
# Trivy (comprehensive scanner)
trivy fs ./ # scan filesystem
trivy image myapp:latest # scan container image
trivy repo https://github.com/user/repo # scan git repo
# SonarQube (SAST)
sonar-scanner \
-Dsonar.projectKey=myproject \
-Dsonar.sources=./src \
-Dsonar.host.url=http://localhost:9000
# OWASP ZAP (DAST - dynamic testing)
docker run -t ghcr.io/zaproxy/zaproxy:stable zap-baseline.py \
  -t http://myapp:3000
| Phase | Description | Tools |
|---|---|---|
| Reconnaissance | Gather info about the target (OSINT, DNS, ports) | Nmap, Shodan, Recon-ng, theHarvester |
| Scanning | Identify vulnerabilities and open services | Nessus, OpenVAS, Nmap scripts |
| Exploitation | Attempt to exploit found vulnerabilities | Metasploit, Burp Suite, SQLmap |
| Post-Exploitation | Maintain access, escalate privileges, pivot | Mimikatz, BloodHound, LinPEAS |
| Reporting | Document findings with severity and remediation | Dradis, CherryTree, custom templates |
| Platform | Scope | Notable Programs |
|---|---|---|
| HackerOne | Public + Private | Google, Uber, GitHub, US DoD |
| Bugcrowd | Public + Private | Mastercard, Tesla, Pinterest |
| Intigriti | Europe-focused | European banks, tech companies |
| Synack | Invitation-only | High-profile, curated researchers |
| YesWeHack | EU-based | French government, EU enterprises |
| Phase | Actions | Key Output |
|---|---|---|
| 1. Preparation | Policies, tools, training, playbooks, contacts | IR plan, team roster, tool access |
| 2. Detection | Alerts, SIEM, endpoint detection, user reports | Incident ticket with severity |
| 3. Containment | Isolate affected systems, preserve evidence | Short-term and long-term containment |
| 4. Eradication | Remove malware, patch vulnerabilities, fix root cause | Clean systems, patched services |
| 5. Recovery | Restore from backups, monitor, validate | Systems back online, monitoring |
| 6. Lessons Learned | Post-mortem review, update playbooks, improve | Post-mortem report, action items |
# ── Security Compliance Checklist ──
## Authentication & Access
- [ ] MFA enabled for all user accounts
- [ ] Service accounts use IAM roles (not long-lived keys)
- [ ] Password policy: min 12 chars, check breached passwords
- [ ] Session timeout: 15 min idle, 8 hour absolute
- [ ] Access reviews conducted quarterly
## Network & Infrastructure
- [ ] All traffic encrypted in transit (TLS 1.3)
- [ ] Database ports not exposed to internet
- [ ] VPC/subnet segmentation implemented
- [ ] WAF enabled on all public endpoints
- [ ] VPN required for administrative access
## Application Security
- [ ] SAST scans in CI/CD pipeline
- [ ] Dependency scanning (npm audit, Trivy, Snyk)
- [ ] DAST scans on staging before release
- [ ] Input validation on all API endpoints
- [ ] Security headers (CSP, HSTS, X-Frame-Options)
- [ ] Secrets not committed to source control
## Data Protection
- [ ] Encryption at rest for all sensitive data
- [ ] PII data classified and cataloged
- [ ] Backups encrypted and tested regularly
- [ ] Data retention policies documented
- [ ] Right-to-erasure capability (GDPR)
## Monitoring & Response
- [ ] Centralized logging (ELK, Loki)
- [ ] SIEM or Security Hub configured
- [ ] Alerting for critical security events
- [ ] Incident response plan tested (tabletop exercise)
- [ ] Post-mortem process defined