Microservice Deployment
Overview
Microservice deployment is at the heart of hands-on Kubernetes. This chapter takes a deep look at deploying and managing microservice applications on Kubernetes, covering multi-service deployment, inter-service communication, configuration management, service discovery, and other key topics.
Core Concepts
Characteristics of Microservice Deployment
- Independent deployment: each service can be deployed and updated on its own
- Service discovery: services register and discover one another automatically
- Load balancing: requests are balanced across instances automatically
- Configuration management: configuration is centralized
- Service governance: traffic control, circuit breaking, graceful degradation
Inter-Service Communication Patterns
- Synchronous: REST APIs, gRPC
- Asynchronous: message queues, event buses
- Service mesh: Istio, Linkerd
- API gateway: a single entry point with request routing
Configuration Management Strategies
- ConfigMap: configuration files and environment variables
- Secret: storage for sensitive data (base64-encoded; enable encryption at rest for real protection)
- Configuration centers: Apollo, Nacos, Consul
- Hot reloading: configuration changes take effect automatically
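As a minimal illustration of the ConfigMap and Secret strategies above, a Pod can consume both as environment variables; the resource names in this sketch are placeholders:

```yaml
# Sketch: consuming a ConfigMap and a Secret as environment variables.
# The names app-config, app-secret, and the image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
  - name: app
    image: registry.example.com/demo-app:v1.0.0
    envFrom:
    - configMapRef:
        name: app-config   # non-sensitive settings
    - secretRef:
        name: app-secret   # sensitive values
```

Mounting them as files under a volume (as the manifests later in this chapter do) works equally well and is preferable when the application expects a config file.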
Multi-Service Deployment
Service Architecture Design
┌─────────────────────────────────────────────────────────┐
│                     Client Requests                     │
└────────────────────┬────────────────────────────────────┘
                     │
                     ▼
            ┌─────────────────┐
            │   API Gateway   │
            │  (Nginx/Kong)   │
            └────────┬────────┘
                     │
        ┌────────────┼────────────┐
        │            │            │
        ▼            ▼            ▼
  ┌───────────┐ ┌───────────┐ ┌───────────┐
  │   User    │ │   Order   │ │  Product  │
  │  Service  │ │  Service  │ │  Service  │
  └─────┬─────┘ └─────┬─────┘ └─────┬─────┘
        │             │             │
        ▼             ▼             ▼
  ┌───────────┐ ┌───────────┐ ┌───────────┐
  │   MySQL   │ │   Redis   │ │  MongoDB  │
  │  User DB  │ │   Cache   │ │ Product DB│
  └───────────┘ └───────────┘ └───────────┘
User Service Deployment
yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: user-service-config
namespace: production
data:
application.yml: |
server:
port: 8080
spring:
datasource:
url: jdbc:mysql://mysql:3306/user_db
username: root
driver-class-name: com.mysql.cj.jdbc.Driver
redis:
host: redis
port: 6379
jpa:
hibernate:
ddl-auto: update
show-sql: false
logging:
level:
root: INFO
com.example.user: DEBUG
management:
endpoints:
web:
exposure:
include: health,info,metrics,prometheus
metrics:
export:
prometheus:
enabled: true
---
apiVersion: v1
kind: Secret
metadata:
name: user-service-secret
namespace: production
type: Opaque
stringData:
db-password: "your-db-password"
redis-password: "your-redis-password"
jwt-secret: "your-jwt-secret-key"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-service
namespace: production
labels:
app: user-service
version: v1.0.0
spec:
replicas: 3
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
selector:
matchLabels:
app: user-service
template:
metadata:
labels:
app: user-service
version: v1.0.0
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "8080"
prometheus.io/path: "/actuator/prometheus"
spec:
serviceAccountName: user-service-sa
containers:
- name: user-service
image: registry.example.com/user-service:v1.0.0
ports:
- containerPort: 8080
name: http
- containerPort: 8081
name: management
env:
- name: SPRING_CONFIG_LOCATION
value: "classpath:/application.yml,/app/config/application.yml"
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: user-service-secret
key: db-password
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: user-service-secret
key: redis-password
- name: JWT_SECRET
valueFrom:
secretKeyRef:
name: user-service-secret
key: jwt-secret
- name: JAVA_OPTS
value: "-Xms1g -Xmx2g -XX:+UseG1GC -XX:MaxGCPauseMillis=200"
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
resources:
requests:
cpu: "500m"
memory: "1Gi"
limits:
cpu: "2000m"
memory: "2Gi"
livenessProbe:
httpGet:
path: /actuator/health/liveness
port: 8080
initialDelaySeconds: 60
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
readinessProbe:
httpGet:
path: /actuator/health/readiness
port: 8080
initialDelaySeconds: 30
periodSeconds: 5
timeoutSeconds: 3
failureThreshold: 3
volumeMounts:
- name: config
mountPath: /app/config
readOnly: true
- name: logs
mountPath: /app/logs
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- "sleep 15"
volumes:
- name: config
configMap:
name: user-service-config
- name: logs
emptyDir: {}
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
app: user-service
topologyKey: kubernetes.io/hostname
---
apiVersion: v1
kind: Service
metadata:
name: user-service
namespace: production
labels:
app: user-service
spec:
type: ClusterIP
selector:
app: user-service
ports:
- name: http
port: 8080
targetPort: 8080
- name: management
port: 8081
targetPort: 8081
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: user-service-sa
namespace: production
automountServiceAccountToken: false
Order Service Deployment
yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: order-service-config
namespace: production
data:
application.yml: |
server:
port: 8080
spring:
datasource:
url: jdbc:mysql://mysql:3306/order_db
username: root
driver-class-name: com.mysql.cj.jdbc.Driver
kafka:
bootstrap-servers: kafka:9092
consumer:
group-id: order-service-group
auto-offset-reset: earliest
producer:
retries: 3
acks: all
jpa:
hibernate:
ddl-auto: update
show-sql: false
feign:
client:
config:
default:
connectTimeout: 5000
readTimeout: 5000
user-service:
url: http://user-service:8080
product-service:
url: http://product-service:8080
resilience4j:
circuitbreaker:
instances:
userService:
slidingWindowSize: 10
failureRateThreshold: 50
waitDurationInOpenState: 60000
productService:
slidingWindowSize: 10
failureRateThreshold: 50
waitDurationInOpenState: 60000
---
apiVersion: v1
kind: Secret
metadata:
name: order-service-secret
namespace: production
type: Opaque
stringData:
db-password: "your-db-password"
kafka-password: "your-kafka-password"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: order-service
namespace: production
labels:
app: order-service
version: v1.0.0
spec:
replicas: 5
selector:
matchLabels:
app: order-service
template:
metadata:
labels:
app: order-service
version: v1.0.0
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "8080"
prometheus.io/path: "/actuator/prometheus"
spec:
containers:
- name: order-service
image: registry.example.com/order-service:v1.0.0
ports:
- containerPort: 8080
name: http
env:
- name: SPRING_CONFIG_LOCATION
value: "classpath:/application.yml,/app/config/application.yml"
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: order-service-secret
key: db-password
- name: KAFKA_PASSWORD
valueFrom:
secretKeyRef:
name: order-service-secret
key: kafka-password
- name: JAVA_OPTS
value: "-Xms2g -Xmx4g -XX:+UseG1GC"
resources:
requests:
cpu: "1000m"
memory: "2Gi"
limits:
cpu: "4000m"
memory: "4Gi"
livenessProbe:
httpGet:
path: /actuator/health/liveness
port: 8080
initialDelaySeconds: 60
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
readinessProbe:
httpGet:
path: /actuator/health/readiness
port: 8080
initialDelaySeconds: 30
periodSeconds: 5
timeoutSeconds: 3
failureThreshold: 3
volumeMounts:
- name: config
mountPath: /app/config
readOnly: true
volumes:
- name: config
configMap:
name: order-service-config
---
apiVersion: v1
kind: Service
metadata:
name: order-service
namespace: production
labels:
app: order-service
spec:
type: ClusterIP
selector:
app: order-service
ports:
- name: http
port: 8080
    targetPort: 8080
Product Service Deployment
yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: product-service-config
namespace: production
data:
application.yml: |
server:
port: 8080
spring:
data:
mongodb:
uri: mongodb://mongodb:27017/product_db
redis:
host: redis
port: 6379
cache:
time-to-live: 600000
logging:
level:
root: INFO
com.example.product: DEBUG
---
apiVersion: v1
kind: Secret
metadata:
name: product-service-secret
namespace: production
type: Opaque
stringData:
mongodb-password: "your-mongodb-password"
redis-password: "your-redis-password"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: product-service
namespace: production
labels:
app: product-service
version: v1.0.0
spec:
replicas: 3
selector:
matchLabels:
app: product-service
template:
metadata:
labels:
app: product-service
version: v1.0.0
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "8080"
prometheus.io/path: "/actuator/prometheus"
spec:
containers:
- name: product-service
image: registry.example.com/product-service:v1.0.0
ports:
- containerPort: 8080
name: http
env:
- name: SPRING_CONFIG_LOCATION
value: "classpath:/application.yml,/app/config/application.yml"
- name: MONGODB_PASSWORD
valueFrom:
secretKeyRef:
name: product-service-secret
key: mongodb-password
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: product-service-secret
key: redis-password
- name: JAVA_OPTS
value: "-Xms1g -Xmx2g -XX:+UseG1GC"
resources:
requests:
cpu: "500m"
memory: "1Gi"
limits:
cpu: "2000m"
memory: "2Gi"
livenessProbe:
httpGet:
path: /actuator/health/liveness
port: 8080
initialDelaySeconds: 60
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
readinessProbe:
httpGet:
path: /actuator/health/readiness
port: 8080
initialDelaySeconds: 30
periodSeconds: 5
timeoutSeconds: 3
failureThreshold: 3
volumeMounts:
- name: config
mountPath: /app/config
readOnly: true
volumes:
- name: config
configMap:
name: product-service-config
---
apiVersion: v1
kind: Service
metadata:
name: product-service
namespace: production
labels:
app: product-service
spec:
type: ClusterIP
selector:
app: product-service
ports:
- name: http
port: 8080
    targetPort: 8080
Inter-Service Communication
HTTP/REST Communication
yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: feign-config
namespace: production
data:
application.yml: |
feign:
client:
config:
default:
connectTimeout: 5000
readTimeout: 5000
loggerLevel: BASIC
user-service:
url: http://user-service:8080
connectTimeout: 3000
readTimeout: 3000
product-service:
url: http://product-service:8080
connectTimeout: 3000
readTimeout: 3000
compression:
request:
enabled: true
mime-types: text/xml,application/xml,application/json
response:
enabled: true
hystrix:
enabled: true
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-service-communication
namespace: production
spec:
podSelector: {}
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
app: order-service
- podSelector:
matchLabels:
app: user-service
- podSelector:
matchLabels:
app: product-service
ports:
- protocol: TCP
port: 8080
egress:
- to:
- podSelector:
matchLabels:
app: user-service
- podSelector:
matchLabels:
app: order-service
- podSelector:
matchLabels:
app: product-service
ports:
- protocol: TCP
      port: 8080
gRPC Communication
yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: grpc-config
namespace: production
data:
application.yml: |
grpc:
server:
port: 9090
client:
user-service:
address: static://user-service:9090
negotiationType: PLAINTEXT
enableKeepAlive: true
keepAliveWithoutCalls: true
product-service:
address: static://product-service:9090
negotiationType: PLAINTEXT
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-service-grpc
namespace: production
spec:
replicas: 3
selector:
matchLabels:
app: user-service-grpc
template:
metadata:
labels:
app: user-service-grpc
spec:
containers:
- name: user-service
image: registry.example.com/user-service-grpc:v1.0.0
ports:
- containerPort: 9090
name: grpc
resources:
requests:
cpu: "500m"
memory: "512Mi"
limits:
cpu: "2000m"
memory: "2Gi"
---
apiVersion: v1
kind: Service
metadata:
name: user-service-grpc
namespace: production
spec:
type: ClusterIP
selector:
app: user-service-grpc
ports:
- name: grpc
port: 9090
    targetPort: 9090
Message Queue Communication
yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: kafka-config
namespace: production
data:
application.yml: |
spring:
kafka:
bootstrap-servers: kafka:9092
producer:
retries: 3
acks: all
batch-size: 16384
buffer-memory: 33554432
key-serializer: org.apache.kafka.common.serialization.StringSerializer
value-serializer: org.apache.kafka.common.serialization.StringSerializer
consumer:
group-id: order-service-group
auto-offset-reset: earliest
enable-auto-commit: false
key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
listener:
ack-mode: manual
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: kafka-producer
namespace: production
spec:
replicas: 2
selector:
matchLabels:
app: kafka-producer
template:
metadata:
labels:
app: kafka-producer
spec:
containers:
- name: kafka-producer
image: registry.example.com/kafka-producer:v1.0.0
ports:
- containerPort: 8080
env:
- name: KAFKA_BOOTSTRAP_SERVERS
value: "kafka:9092"
resources:
requests:
cpu: "200m"
memory: "256Mi"
limits:
cpu: "500m"
memory: "512Mi"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: kafka-consumer
namespace: production
spec:
replicas: 3
selector:
matchLabels:
app: kafka-consumer
template:
metadata:
labels:
app: kafka-consumer
spec:
containers:
- name: kafka-consumer
image: registry.example.com/kafka-consumer:v1.0.0
ports:
- containerPort: 8080
env:
- name: KAFKA_BOOTSTRAP_SERVERS
value: "kafka:9092"
- name: KAFKA_GROUP_ID
value: "order-service-group"
resources:
requests:
cpu: "200m"
memory: "256Mi"
limits:
cpu: "500m"
            memory: "512Mi"
Configuration Management
ConfigMap Management
yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: app-config
namespace: production
data:
APP_NAME: "my-application"
APP_ENV: "production"
LOG_LEVEL: "INFO"
MAX_CONNECTIONS: "100"
TIMEOUT: "30"
FEATURE_FLAG_NEW_UI: "true"
CACHE_TTL: "3600"
RATE_LIMIT: "1000"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-config
namespace: production
data:
nginx.conf: |
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_types text/plain text/css text/xml text/javascript
application/json application/javascript application/xml+rss;
upstream user_service {
server user-service:8080;
}
upstream order_service {
server order-service:8080;
}
upstream product_service {
server product-service:8080;
}
server {
listen 80;
server_name _;
location /api/users {
proxy_pass http://user_service;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location /api/orders {
proxy_pass http://order_service;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location /api/products {
proxy_pass http://product_service;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
    }
Secret Management
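Values under a Secret's `data:` field are base64-encoded, not encrypted. Encoding and decoding at the shell works like this:

```shell
# Encode a literal for a Secret's data: field
# (printf rather than echo avoids encoding a trailing newline)
printf 'your-db-password' | base64
# → eW91ci1kYi1wYXNzd29yZA==

# Decode a value read back from the API server
printf 'eW91ci1kYi1wYXNzd29yZA==' | base64 --decode
# → your-db-password
```

The `stringData` field used in the manifests below accepts plain text and lets the API server do the encoding, which avoids this step entirely.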
yaml
apiVersion: v1
kind: Secret
metadata:
name: db-credentials
namespace: production
type: Opaque
stringData:
mysql-root-password: "your-mysql-root-password"
mysql-user: "app_user"
mysql-password: "your-mysql-password"
---
apiVersion: v1
kind: Secret
metadata:
name: redis-credentials
namespace: production
type: Opaque
stringData:
redis-password: "your-redis-password"
---
apiVersion: v1
kind: Secret
metadata:
name: jwt-secret
namespace: production
type: Opaque
stringData:
jwt-secret-key: "your-jwt-secret-key-at-least-256-bits"
---
apiVersion: v1
kind: Secret
metadata:
name: tls-secret
namespace: production
type: kubernetes.io/tls
stringData:
tls.crt: |
-----BEGIN CERTIFICATE-----
MIIC2DCCAcCgAwIBAgIBATANBgkqhkiG9w0BAQsFADA8MQswCQYDVQQGEwJVUzEP
MA0GA1UECgwGR29vZ2xlMRQwEgYDVQQDDAtsb2NhbGhvc3QwHhcNMjQwMTAxMDAw
MDAwWhcNMjUwMTAxMDAwMDAwWjA8MQswCQYDVQQGEwJVUzEPMA0GA1UECgwGR29v
Z2xlMRQwEgYDVQQDDAtsb2NhbGhvc3QwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAw
ggEKAoIBAQC5HXa1Wa3R5C5hL5v5V5V5V5V5V5V5V5V5V5V5V5V5V5V5V5V5V5V5
-----END CERTIFICATE-----
tls.key: |
-----BEGIN PRIVATE KEY-----
MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQC5HXa1Wa3R5C5h
L5v5V5V5V5V5V5V5V5V5V5V5V5V5V5V5V5V5V5V5V5V5V5V5V5V5V5V5V5V5V5V5
    -----END PRIVATE KEY-----
Configuration Hot Reloading
yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: app-config-v2
namespace: production
data:
application.yml: |
app:
name: my-application
version: 2.0.0
features:
new-ui: true
dark-mode: true
notifications: true
cache:
enabled: true
ttl: 7200
rate-limit:
enabled: true
requests: 2000
window: 60
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: app-deployment
namespace: production
spec:
replicas: 3
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
annotations:
config-hash: "v2"
spec:
containers:
- name: app
image: registry.example.com/my-app:v2.0.0
ports:
- containerPort: 8080
volumeMounts:
- name: config
mountPath: /app/config
readOnly: true
volumes:
- name: config
configMap:
      name: app-config-v2
kubectl Commands
Service Deployment Management
bash
# Deploy the services
kubectl apply -f user-service.yaml
kubectl apply -f order-service.yaml
kubectl apply -f product-service.yaml
# Check Deployment status
kubectl get deployments -n production
# Check Pod status
kubectl get pods -n production -o wide
# Check Service status
kubectl get services -n production
# Inspect a Service
kubectl describe service user-service -n production
# Tail Pod logs
kubectl logs -f deployment/user-service -n production
# Tail logs across multiple Pods
kubectl logs -f -l app=user-service -n production
# Open a shell in a Pod container
kubectl exec -it deployment/user-service -n production -- /bin/sh
# Inspect Pod environment variables
kubectl exec deployment/user-service -n production -- env
# Inspect the Pod's configuration file
kubectl exec deployment/user-service -n production -- cat /app/config/application.yml
ConfigMap and Secret Management
bash
# Create a ConfigMap from literals
kubectl create configmap app-config --from-literal=APP_NAME=my-app --from-literal=APP_ENV=prod -n production
# Create a ConfigMap from a file
kubectl create configmap nginx-config --from-file=nginx.conf -n production
# List ConfigMaps
kubectl get configmaps -n production
# Inspect a ConfigMap
kubectl describe configmap app-config -n production
# Edit a ConfigMap
kubectl edit configmap app-config -n production
# Create a Secret
kubectl create secret generic db-credentials --from-literal=mysql-password=your-password -n production
# Create a TLS Secret from certificate files (produces type kubernetes.io/tls)
kubectl create secret tls tls-secret --cert=tls.crt --key=tls.key -n production
# List Secrets
kubectl get secrets -n production
# Inspect a Secret
kubectl describe secret db-credentials -n production
# Decode a Secret value
kubectl get secret db-credentials -n production -o jsonpath='{.data.mysql-password}' | base64 --decode
# Delete a ConfigMap
kubectl delete configmap app-config -n production
# Delete a Secret
kubectl delete secret db-credentials -n production
Inter-Service Communication Testing
bash
# Test Service DNS resolution
kubectl run test-dns --image=busybox -n production --rm -it -- nslookup user-service
# Test Service connectivity
kubectl run test-curl --image=curlimages/curl -n production --rm -it -- curl http://user-service:8080/actuator/health
# Call a Service API
kubectl run test-api --image=curlimages/curl -n production --rm -it -- curl -X GET http://user-service:8080/api/users/1
# Test a service-to-service call
kubectl exec deployment/order-service -n production -- curl http://user-service:8080/actuator/health
# List Service endpoints
kubectl get endpoints -n production
# Inspect a Service's endpoints
kubectl describe endpoints user-service -n production
# Test load balancing
for i in {1..10}; do
  kubectl run test-$i --image=curlimages/curl -n production --rm -it -- curl -s http://user-service:8080/api/info | grep hostname
done
Service Scaling
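The imperative `kubectl autoscale` command used below can also be captured declaratively. A sketch of the equivalent autoscaling/v2 HorizontalPodAutoscaler manifest, assuming metrics-server is installed in the cluster:

```yaml
# Sketch: declarative equivalent of
#   kubectl autoscale deployment user-service --cpu-percent=70 --min=3 --max=10
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: user-service
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: user-service
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

Keeping the HPA in version control alongside the Deployment makes scaling policy reviewable, which the one-off `kubectl autoscale` command does not.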
bash
# Scale up manually
kubectl scale deployment user-service --replicas=5 -n production
# Scale down manually
kubectl scale deployment user-service --replicas=2 -n production
# Enable autoscaling
kubectl autoscale deployment user-service --cpu-percent=70 --min=3 --max=10 -n production
# Check HPA status
kubectl get hpa -n production
# Inspect the HPA
kubectl describe hpa user-service -n production
# Delete the HPA
kubectl delete hpa user-service -n production
Service Updates and Rollbacks
bash
# Update the image
kubectl set image deployment/user-service user-service=registry.example.com/user-service:v2.0.0 -n production
# Watch rollout status
kubectl rollout status deployment/user-service -n production
# View rollout history
kubectl rollout history deployment/user-service -n production
# Roll back to the previous revision
kubectl rollout undo deployment/user-service -n production
# Roll back to a specific revision
kubectl rollout undo deployment/user-service --to-revision=2 -n production
# Watch Pods during the rollout
kubectl get pods -n production -w
# Pause the rollout
kubectl rollout pause deployment/user-service -n production
# Resume the rollout
kubectl rollout resume deployment/user-service -n production
Practical Examples
Example 1: E-commerce Microservices Deployment
Scenario
Deploy a complete e-commerce microservice system, including user, order, product, payment, and notification services.
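The payment and notification Deployments below reference `payment-secret` and `notification-secret`, which must exist before the Pods can start. A sketch of those Secrets with placeholder values:

```yaml
# Sketch: Secrets referenced by the payment and notification services below.
# All values are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: payment-secret
  namespace: ecommerce
type: Opaque
stringData:
  stripe-api-key: "your-stripe-api-key"
  paypal-client-id: "your-paypal-client-id"
  paypal-client-secret: "your-paypal-client-secret"
---
apiVersion: v1
kind: Secret
metadata:
  name: notification-secret
  namespace: ecommerce
type: Opaque
stringData:
  smtp-user: "your-smtp-user"
  smtp-password: "your-smtp-password"
```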
Deployment Configuration
yaml
# API Gateway
apiVersion: apps/v1
kind: Deployment
metadata:
name: api-gateway
namespace: ecommerce
spec:
replicas: 3
selector:
matchLabels:
app: api-gateway
template:
metadata:
labels:
app: api-gateway
spec:
containers:
- name: api-gateway
image: registry.example.com/ecommerce/api-gateway:v1.0.0
ports:
- containerPort: 8080
env:
- name: USER_SERVICE_URL
value: "http://user-service:8080"
- name: ORDER_SERVICE_URL
value: "http://order-service:8080"
- name: PRODUCT_SERVICE_URL
value: "http://product-service:8080"
- name: PAYMENT_SERVICE_URL
value: "http://payment-service:8080"
resources:
requests:
cpu: "500m"
memory: "512Mi"
limits:
cpu: "2000m"
memory: "2Gi"
livenessProbe:
httpGet:
path: /actuator/health
port: 8080
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /actuator/health
port: 8080
initialDelaySeconds: 10
periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
name: api-gateway
namespace: ecommerce
spec:
type: LoadBalancer
selector:
app: api-gateway
ports:
- port: 80
targetPort: 8080
---
# Payment service
apiVersion: apps/v1
kind: Deployment
metadata:
name: payment-service
namespace: ecommerce
spec:
replicas: 3
selector:
matchLabels:
app: payment-service
template:
metadata:
labels:
app: payment-service
spec:
containers:
- name: payment-service
image: registry.example.com/ecommerce/payment-service:v1.0.0
ports:
- containerPort: 8080
env:
- name: STRIPE_API_KEY
valueFrom:
secretKeyRef:
name: payment-secret
key: stripe-api-key
- name: PAYPAL_CLIENT_ID
valueFrom:
secretKeyRef:
name: payment-secret
key: paypal-client-id
- name: PAYPAL_CLIENT_SECRET
valueFrom:
secretKeyRef:
name: payment-secret
key: paypal-client-secret
resources:
requests:
cpu: "500m"
memory: "512Mi"
limits:
cpu: "2000m"
memory: "2Gi"
---
apiVersion: v1
kind: Service
metadata:
name: payment-service
namespace: ecommerce
spec:
type: ClusterIP
selector:
app: payment-service
ports:
- port: 8080
targetPort: 8080
---
# Notification service
apiVersion: apps/v1
kind: Deployment
metadata:
name: notification-service
namespace: ecommerce
spec:
replicas: 2
selector:
matchLabels:
app: notification-service
template:
metadata:
labels:
app: notification-service
spec:
containers:
- name: notification-service
image: registry.example.com/ecommerce/notification-service:v1.0.0
ports:
- containerPort: 8080
env:
- name: SMTP_HOST
value: "smtp.example.com"
- name: SMTP_PORT
value: "587"
- name: SMTP_USER
valueFrom:
secretKeyRef:
name: notification-secret
key: smtp-user
- name: SMTP_PASSWORD
valueFrom:
secretKeyRef:
name: notification-secret
key: smtp-password
resources:
requests:
cpu: "200m"
memory: "256Mi"
limits:
cpu: "500m"
memory: "512Mi"
---
apiVersion: v1
kind: Service
metadata:
name: notification-service
namespace: ecommerce
spec:
type: ClusterIP
selector:
app: notification-service
ports:
- port: 8080
    targetPort: 8080
Deployment Commands
bash
# Create the namespace
kubectl create namespace ecommerce
# Deploy all services
kubectl apply -f api-gateway.yaml
kubectl apply -f user-service.yaml
kubectl apply -f order-service.yaml
kubectl apply -f product-service.yaml
kubectl apply -f payment-service.yaml
kubectl apply -f notification-service.yaml
# Check deployment status
kubectl get all -n ecommerce
# Test access through the gateway
kubectl run test-client --image=curlimages/curl -n ecommerce --rm -it -- curl http://api-gateway/api/health
# Tail the gateway logs
kubectl logs -f deployment/api-gateway -n ecommerce
# Test service-to-service communication
kubectl exec deployment/api-gateway -n ecommerce -- curl http://user-service:8080/actuator/health
Example 2: Service Mesh Deployment
Scenario
Deploy microservices with the Istio service mesh to get traffic management, circuit breaking, and canary releases.
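Sidecar injection can be enabled per Pod with an annotation, or for an entire namespace with a label. A sketch of the namespace-level approach (one possible setup, not required by the manifests that follow):

```yaml
# Sketch: label the namespace so Istio injects sidecars into every Pod
# created in it, instead of annotating each Pod template individually.
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    istio-injection: enabled
```

With the label in place, existing Pods must be restarted (e.g. `kubectl rollout restart`) before they pick up the sidecar.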
Deployment Configuration
yaml
# User service - Istio sidecar injection enabled
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-service
namespace: production
spec:
replicas: 3
selector:
matchLabels:
app: user-service
template:
metadata:
labels:
app: user-service
version: v1
annotations:
sidecar.istio.io/inject: "true"
spec:
containers:
- name: user-service
image: registry.example.com/user-service:v1.0.0
ports:
- containerPort: 8080
resources:
requests:
cpu: "500m"
memory: "512Mi"
limits:
cpu: "2000m"
memory: "2Gi"
---
apiVersion: v1
kind: Service
metadata:
name: user-service
namespace: production
spec:
type: ClusterIP
selector:
app: user-service
ports:
- name: http
port: 8080
targetPort: 8080
---
# Istio DestinationRule - circuit-breaking configuration
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: user-service
namespace: production
spec:
host: user-service
trafficPolicy:
connectionPool:
tcp:
maxConnections: 100
http:
h2UpgradePolicy: UPGRADE
http1MaxPendingRequests: 100
http2MaxRequests: 1000
outlierDetection:
consecutive5xxErrors: 5
interval: 30s
baseEjectionTime: 30s
maxEjectionPercent: 50
tls:
mode: ISTIO_MUTUAL
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
---
# Istio VirtualService - canary release
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: user-service
namespace: production
spec:
hosts:
- user-service
http:
- route:
- destination:
host: user-service
subset: v1
weight: 90
- destination:
host: user-service
subset: v2
weight: 10
retries:
attempts: 3
perTryTimeout: 2s
retryOn: gateway-error,connect-failure,refused-stream
---
# Istio Gateway - external access
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
name: user-service-gateway
namespace: production
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "user.example.com"
- port:
number: 443
name: https
protocol: HTTPS
tls:
mode: SIMPLE
credentialName: user-service-tls
hosts:
- "user.example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: user-service-external
namespace: production
spec:
hosts:
- "user.example.com"
gateways:
- user-service-gateway
http:
- route:
- destination:
host: user-service
port:
        number: 8080
Management Commands
bash
# Apply the Istio configuration
kubectl apply -f user-service-istio.yaml
# View the Istio resources
kubectl get destinationrule -n production
kubectl get virtualservice -n production
kubectl get gateway -n production
# Query the Istio proxy admin endpoint
kubectl exec deployment/user-service -n production -- curl http://localhost:15000/help
# View Istio proxy metrics
kubectl exec deployment/user-service -n production -- curl http://localhost:15000/stats
# Test the canary traffic split
for i in {1..100}; do
  kubectl run test-$i --image=curlimages/curl -n production --rm -it -- curl -s http://user-service:8080/api/version
done | grep -c "v2"
# Tail the sidecar proxy logs
kubectl logs -f deployment/user-service -n production -c istio-proxy
Example 3: Multi-Environment Configuration Management
Scenario
Configure separate ConfigMaps and Secrets for development, staging, and production to isolate environments and manage their configuration independently.
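The overlays below reference `../../base` and use `behavior: merge`, which requires the base to define matching generators. A sketch of a possible base/kustomization.yaml; the resource filenames are assumptions:

```yaml
# Sketch: base/kustomization.yaml that the overlays merge against.
# deployment.yaml and service.yaml are assumed filenames.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
configMapGenerator:
  - name: app-config
    files:
      - application.yml
secretGenerator:
  - name: app-secret
    type: Opaque
    files:
      - db-password
      - redis-password
      - api-key
```

Generators append a content hash to the generated names, so updating a file in an overlay automatically rolls the Pods that mount it.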
Deployment Configuration
yaml
# Development environment configuration
apiVersion: v1
kind: ConfigMap
metadata:
name: app-config
namespace: development
data:
application.yml: |
app:
name: my-app
env: development
debug: true
database:
host: mysql-dev
port: 3306
name: app_dev
redis:
host: redis-dev
port: 6379
logging:
level: DEBUG
---
apiVersion: v1
kind: Secret
metadata:
name: app-secret
namespace: development
type: Opaque
stringData:
db-password: "dev-password"
redis-password: "dev-redis-password"
api-key: "dev-api-key"
---
# Staging environment configuration
apiVersion: v1
kind: ConfigMap
metadata:
name: app-config
namespace: staging
data:
application.yml: |
app:
name: my-app
env: staging
debug: false
database:
host: mysql-staging
port: 3306
name: app_staging
redis:
host: redis-staging
port: 6379
logging:
level: INFO
---
apiVersion: v1
kind: Secret
metadata:
name: app-secret
namespace: staging
type: Opaque
stringData:
db-password: "staging-password"
redis-password: "staging-redis-password"
api-key: "staging-api-key"
---
# Production environment configuration
apiVersion: v1
kind: ConfigMap
metadata:
name: app-config
namespace: production
data:
application.yml: |
app:
name: my-app
env: production
debug: false
database:
host: mysql-prod
port: 3306
name: app_prod
redis:
host: redis-prod
port: 6379
logging:
level: WARN
---
apiVersion: v1
kind: Secret
metadata:
name: app-secret
namespace: production
type: Opaque
stringData:
db-password: "prod-password"
redis-password: "prod-redis-password"
api-key: "prod-api-key"
---
# Kustomize - overlays/development/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: development
resources:
- ../../base
configMapGenerator:
- name: app-config
behavior: merge
files:
- application.yml
secretGenerator:
- name: app-secret
behavior: merge
type: Opaque
files:
- db-password
- redis-password
- api-key
---
# Kustomize - overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: production
resources:
- ../../base
configMapGenerator:
- name: app-config
behavior: merge
files:
- application.yml
secretGenerator:
- name: app-secret
behavior: merge
type: Opaque
files:
- db-password
- redis-password
- api-key
commonLabels:
  environment: production
Deployment Commands
bash
# Deploy the development environment with Kustomize
kubectl apply -k overlays/development
# Deploy staging
kubectl apply -k overlays/staging
# Deploy production
kubectl apply -k overlays/production
# Inspect each environment's configuration
kubectl get configmap app-config -n development -o yaml
kubectl get configmap app-config -n staging -o yaml
kubectl get configmap app-config -n production -o yaml
# Diff two environments
diff <(kubectl get configmap app-config -n development -o yaml) <(kubectl get configmap app-config -n production -o yaml)
# Verify the configuration inside the Pods
kubectl exec deployment/my-app -n development -- cat /app/config/application.yml
kubectl exec deployment/my-app -n production -- cat /app/config/application.yml
Troubleshooting Guide
Common Issue 1: Service Unreachable
Symptoms
- The Service cannot be reached from inside the cluster
- DNS resolution fails
- Connections time out
Diagnostic Steps
bash
# Check Service status
kubectl get services -n production
# Check Service endpoints
kubectl get endpoints -n production
# Check Pod labels
kubectl get pods -n production --show-labels
# Test DNS resolution
kubectl run test-dns --image=busybox -n production --rm -it -- nslookup user-service
# Test connectivity
kubectl run test-curl --image=curlimages/curl -n production --rm -it -- curl -v http://user-service:8080/actuator/health
# Check NetworkPolicies
kubectl get networkpolicy -n production
# Inspect listening ports inside the Pod
kubectl exec deployment/user-service -n production -- netstat -tlnp
Solution
yaml
# Verify the Service selector matches the Pod labels
spec:
  selector:
    app: user-service  # must match the Pod template labels
# Verify the port mapping
spec:
  ports:
  - port: 8080
    targetPort: 8080  # must match the containerPort
Common Issue 2: ConfigMap Updates Not Taking Effect
Symptoms
- The application configuration does not change after the ConfigMap is updated
- Pods must be restarted for changes to apply
Diagnostic Steps
bash
# Inspect the ConfigMap
kubectl get configmap app-config -n production -o yaml
# List the files mounted from the ConfigMap
kubectl exec deployment/my-app -n production -- ls -la /app/config
# Read the configuration inside the Pod
kubectl exec deployment/my-app -n production -- cat /app/config/application.yml
# Check the ConfigMap creation time
kubectl get configmap app-config -n production -o jsonpath='{.metadata.creationTimestamp}'
# Check the Pod start time
kubectl get pods -n production -o jsonpath='{.items[0].metadata.creationTimestamp}'
Solution
yaml
# Option 1: bump a config-version annotation to trigger a rolling restart
metadata:
  annotations:
    config-version: "v2"
# Option 2: use Reloader to restart Pods automatically on ConfigMap changes.
# Install Reloader first:
#   kubectl apply -f https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml
# Then annotate the Deployment:
metadata:
  annotations:
    configmap.reloader.stakater.com/reload: "app-config"
Common Issue 3: Inter-Service Call Failures
Symptoms
- Service A fails when calling service B
- Timeouts or connection refused
Diagnostic Steps
bash
# Check service B's Pods
kubectl get pods -l app=service-b -n production
# Check service B's logs
kubectl logs -f deployment/service-b -n production
# Hit service B's health endpoint from service A
kubectl exec deployment/service-a -n production -- curl http://service-b:8080/actuator/health
# Inspect NetworkPolicies
kubectl describe networkpolicy -n production
# Check service B's endpoints
kubectl get endpoints service-b -n production
# Check service B's resource usage
kubectl top pods -l app=service-b -n production
# Check events related to service B
kubectl get events -n production --field-selector involvedObject.name=service-b
Solution
yaml
# Make sure no NetworkPolicy is blocking the traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-service-a-to-b
spec:
  podSelector:
    matchLabels:
      app: service-b
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: service-a
    ports:
    - protocol: TCP
      port: 8080
# Increase the client timeouts
feign:
  client:
    config:
      service-b:
        connectTimeout: 10000
        readTimeout: 10000
Common Issue 4: Secret Decoding Failures
Symptoms
- The Secret fails to mount
- The application fails to start
Diagnostic Steps
bash
# Inspect the Secret
kubectl get secret app-secret -n production -o yaml
# Decode a Secret value
kubectl get secret app-secret -n production -o jsonpath='{.data.db-password}' | base64 --decode
# Check the Secret type
kubectl get secret app-secret -n production -o jsonpath='{.type}'
# Check Pod events
kubectl describe pod <pod-name> -n production
# Check how the Secret is referenced
kubectl get deployment my-app -n production -o yaml | grep -A 5 secretKeyRef
Solution
yaml
# Make sure the Secret type is correct
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque  # or another type such as kubernetes.io/tls
data:
  db-password: <base64-encoded-password>
# Make sure the Secret reference is correct
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: app-secret  # Secret name
        key: db-password  # key within the Secret
Common Issue 5: Service Updates Failing
Symptoms
- Pods of the new version fail to start
- The rolling update is stuck
Troubleshooting Steps
bash
# Check Deployment status
kubectl describe deployment my-app -n production
# Check Pod status
kubectl get pods -l app=my-app -n production
# View logs of a new Pod
kubectl logs <new-pod-name> -n production
# View events of a new Pod
kubectl describe pod <new-pod-name> -n production
# View rollout history
kubectl rollout history deployment/my-app -n production
# Check resource quotas
kubectl describe resourcequota -n production
# Check node resources
kubectl describe nodes
Solution
bash
# Roll back to the previous revision
kubectl rollout undo deployment/my-app -n production
# Roll back to a specific revision
kubectl rollout undo deployment/my-app --to-revision=2 -n production
# Pause the rollout
kubectl rollout pause deployment/my-app -n production
# View logs of the previous (crashed) container
kubectl logs <failed-pod> -n production --previous
Best Practice Recommendations
1. Service Deployment Best Practices
Resource Configuration
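The values below follow a simple sizing rule: set requests to observed usage plus 20-30% headroom, and cap limits at no more than 4x the requests. As arithmetic, with a hypothetical 400Mi of observed memory usage:

```python
def size_resources(observed_mi: int, headroom: float = 0.25, limit_factor: int = 4):
    """Derive a memory request/limit pair (in Mi) from observed usage,
    using the headroom and cap suggested in the text (not Kubernetes defaults)."""
    request_mi = int(observed_mi * (1 + headroom))
    limit_mi = request_mi * limit_factor
    return request_mi, limit_mi

req, lim = size_resources(400)
print(f"requests: {req}Mi, limits: {lim}Mi")  # → requests: 500Mi, limits: 2000Mi
```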
yaml
resources:
  requests:
    cpu: "500m"      # size based on observed usage
    memory: "512Mi"  # leave 20-30% headroom
  limits:
    cpu: "2000m"     # no more than 4x requests
    memory: "2Gi"    # no more than 4x requests
Health Checks
yaml
livenessProbe:
  httpGet:
    path: /actuator/health/liveness
    port: 8080
  initialDelaySeconds: 60
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /actuator/health/readiness
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 5
  timeoutSeconds: 3
  failureThreshold: 3
Graceful Shutdown
yaml
lifecycle:
  preStop:
    exec:
      command:
        - /bin/sh
        - -c
        - "sleep 15"
2. Configuration Management Best Practices
Separate ConfigMaps
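Splitting configuration as below keeps file-style application config apart from flat key/value environment settings; the env-config entries are typically injected with `envFrom` and read by the application as ordinary environment variables. A sketch of the consuming side (the fallback defaults are illustrative):

```python
import os

def load_env_config() -> dict:
    """Read settings injected from the env-config ConfigMap (via envFrom),
    falling back to development defaults when run outside the cluster."""
    return {
        "app_env": os.environ.get("APP_ENV", "development"),
        "log_level": os.environ.get("LOG_LEVEL", "DEBUG"),
    }

print(load_env_config())
```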
yaml
# Application configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: production
data:
  application.yml: |
    app:
      name: my-app
      version: 1.0.0
---
# Environment configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config
  namespace: production
data:
  APP_ENV: "production"
  LOG_LEVEL: "INFO"
Secret Encryption
bash
# Note: kubectl has no --encrypt flag. Encryption of Secrets at rest (e.g. via
# a KMS provider) is configured on the API server with an EncryptionConfiguration.
kubectl create secret generic my-secret \
  --from-literal=password=my-password \
  --namespace production
# Use Sealed Secrets to keep encrypted Secrets safely in Git
kubeseal --format=yaml < secret.yaml > sealed-secret.yaml
3. Service Communication Best Practices
Service Discovery
yaml
# Access by Service name (same namespace)
env:
  - name: USER_SERVICE_URL
    value: "http://user-service:8080"
# Access by fully qualified cluster DNS name
env:
  - name: USER_SERVICE_URL
    value: "http://user-service.production.svc.cluster.local:8080"
Timeouts and Retries
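The settings below give each call a 5s connect/read budget and retry with exponential backoff. The retry policy itself (3 attempts, 1s initial wait, doubling each time) is easy to sketch in plain code to check the numbers:

```python
import time

def call_with_retry(func, max_attempts=3, wait_s=1.0, multiplier=2.0):
    """Retry func with exponential backoff: wait 1s, then 2s, then give up."""
    delay = wait_s
    for attempt in range(1, max_attempts + 1):
        try:
            return func()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts; surface the last error
            time.sleep(delay)
            delay *= multiplier
```

With these defaults a call that keeps failing waits 1s, then 2s, then raises, for a worst case of about 3s of waiting plus three call timeouts.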
yaml
feign:
  client:
    config:
      default:
        connectTimeout: 5000
        readTimeout: 5000
resilience4j:
  retry:
    instances:
      userService:
        maxAttempts: 3  # resilience4j's property name is maxAttempts
        waitDuration: 1000
        enableExponentialBackoff: true
        exponentialBackoffMultiplier: 2
4. Security Best Practices
ServiceAccount
yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account
  namespace: production
automountServiceAccountToken: false
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: my-role
  namespace: production
rules:
  - apiGroups: [""]
    resources: ["configmaps", "secrets"]
    verbs: ["get", "list"]
Network Policies
yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
5. Monitoring and Logging Best Practices
Application Metrics
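The annotations below only tell Prometheus where to scrape; the application has to serve that path in the Prometheus text exposition format (for Spring Boot that is what /actuator/prometheus provides). A minimal stdlib-only sketch of such an endpoint; real Python services would normally use the prometheus_client library instead:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

REQUEST_COUNT = 0  # incremented by the application as it handles requests

def render_metrics() -> str:
    """Render one counter in the Prometheus text exposition format."""
    return (
        "# HELP http_requests_total Total HTTP requests handled.\n"
        "# TYPE http_requests_total counter\n"
        f"http_requests_total {REQUEST_COUNT}\n"
    )

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/metrics":
            body = render_metrics().encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; version=0.0.4")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

# To serve: HTTPServer(("", 8080), MetricsHandler).serve_forever()
```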
yaml
annotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "8080"
  prometheus.io/path: "/actuator/prometheus"
Log Collection
yaml
spec:
  containers:
    - name: app
      volumeMounts:
        - name: logs
          mountPath: /app/logs
      env:
        - name: LOG_PATH
          value: "/app/logs"
6. Deployment Strategy Best Practices
Rolling Update
yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
Blue-Green Deployment
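In the setup below, cutting over is nothing more than repointing the Service selector from blue to green. The patch body can be built programmatically and applied with `kubectl patch`; the service and label names match the example manifests, otherwise this is illustrative:

```python
import json

def selector_patch(version: str) -> str:
    """Build the strategic-merge patch that repoints the Service selector."""
    return json.dumps({"spec": {"selector": {"app": "my-app", "version": version}}})

patch = selector_patch("green")
# Apply the cutover with kubectl:
print(f"kubectl patch service my-app -n production -p '{patch}'")
```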
yaml
# Use two parallel Deployments
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
      version: blue
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
      version: green
---
# Switch traffic via the Service selector
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    version: blue  # change to green to cut over
7. Multi-Environment Management Best Practices
Kustomize
yaml
# base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: my-app
          image: my-app:latest
---
# overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: production
resources:
  - ../../base
patchesStrategicMerge:  # deprecated in newer Kustomize; prefer the patches field
  - deployment-patch.yaml
commonLabels:
  environment: production
Summary
Deploying microservices is at the heart of hands-on Kubernetes. In this chapter we covered:
- Multi-service deployment: complete deployment configurations for the user, order, and product services
- Inter-service communication: HTTP/REST, gRPC, message queues, and other communication patterns
- Configuration management: creating, updating, and managing ConfigMaps and Secrets
- Worked examples: real-world scenarios covering an e-commerce system, a service mesh, and multi-environment configuration
- Troubleshooting: diagnosing and resolving common problems
- Best practices: deployment experience and recommendations for production environments
After working through this chapter, you should be able to deploy and manage microservice applications on your own and handle the common problems that come up.