If you run Kubernetes in production, you most likely do not want to lose your application logs. Kubernetes provides no native mechanism to rotate logs (yet).
There are several blog articles on how to implement cluster logging.
Most of them use a sidecar container to forward stdout
and stderr
to some kind of log collector.
To me that sounds like a waste of resources, because Docker itself already implements a number of log drivers (13 so far), including fluentd
and awslogs
.
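For a single container outside of Kubernetes, picking a log driver is just a flag on docker run. A minimal sketch, assuming a fluentd instance listening on the default forward port 24224:
admin@your-awesome-host:~$ sudo docker run --log-driver=fluentd --log-opt fluentd-address=localhost:24224 alpine echo hello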
In my project we used kops
to set up Kubernetes on AWS. This convenient tool performs all the tasks required to get a running Kubernetes cluster.
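For reference, the whole setup boils down to a handful of kops commands. A minimal sketch, assuming a hypothetical state-store bucket (your-kops-state-bucket) and cluster name (k8s.example.com):
export KOPS_STATE_STORE=s3://your-kops-state-bucket
kops create cluster --zones=eu-central-1a k8s.example.com
kops update cluster k8s.example.com --yes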
However, it is a bit tricky to change the Docker daemon configuration:
setting $DOCKER_OPTS
did not work for me (and others).
You might know that kops uses Debian Jessie as the basis for its Amazon Machine Images.
This image has the Docker daemon installed as a systemd unit, so you can control it via systemctl
.
When you inspect the output of
admin@your-awesome-host:~$ sudo systemctl show docker | grep Env
you will see
EnvironmentFile=/etc/sysconfig/docker (ignore_errors=no)
This is the place where we have to put our log options for the Docker daemon.
For me, the file looked like this:
DOCKER_OPTS="--ip-masq=false --iptables=false --log-level=warn --storage-driver=overlay "
DOCKER_NOFILE=1000000
Append the options you want to the end of the first line:
DOCKER_OPTS="--ip-masq=false --iptables=false --log-level=warn --storage-driver=overlay --log-driver=awslogs --log-opt awslogs-region=eu-central-1 --log-opt awslogs-group=k8s"
DOCKER_NOFILE=1000000
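If you would rather script the edit than open an editor, a sed one-liner can append the same options. A sketch, assuming the file matches the layout shown above (a single DOCKER_OPTS line ending with a closing quote):
admin@your-awesome-host:~$ sudo sed -i '/^DOCKER_OPTS=/ s|"$| --log-driver=awslogs --log-opt awslogs-region=eu-central-1 --log-opt awslogs-group=k8s"|' /etc/sysconfig/docker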
Restart the Docker daemon via sudo systemctl restart docker
.
Verify that the options are in place via
admin@your-awesome-host:~$ sudo docker info | grep Logging
Logging Driver: awslogs
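You can also verify what an individual container picked up, since docker inspect exposes the effective log configuration:
admin@your-awesome-host:~$ sudo docker inspect --format '{{.HostConfig.LogConfig.Type}}' <some container ID>
awslogs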
Now we need to allow the AWS instance(s) to push logs to CloudWatch.
Run kops edit cluster <your cluster name>
and add the following as a child of spec
:
additionalPolicies:
  node: |
    [
      {
        "Effect": "Allow",
        "Action": ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"],
        "Resource": ["*"]
      }
    ]
  master: |
    [
      {
        "Effect": "Allow",
        "Action": ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"],
        "Resource": ["*"]
      }
    ]
Preview the changes via kops update cluster <your cluster name>
and then apply them by adding the --yes flag.
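To double-check that the policy actually landed on the instance roles, you can list their inline policies. A sketch, assuming kops's default role naming scheme (nodes.<cluster name> and masters.<cluster name>):
aws iam list-role-policies --role-name nodes.<your cluster name>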
Head to the AWS CloudWatch console and check that you can see one log group (k8s in my case) and, inside it, one stream per container (streams are named after their container IDs).
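If you prefer the CLI over the console, the same check looks like this:
aws logs describe-log-groups --log-group-name-prefix k8s --region eu-central-1
aws logs describe-log-streams --log-group-name k8s --region eu-central-1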
Great! The only problem now is that as soon as a Kubernetes node crashes, or we scale our cluster, we need to perform the Docker daemon changes on each new instance again.
Automation is the solution!
The first idea that comes to mind is some kind of user-data script
mechanism, like the one we know from the AWS instance boot sequence.
Unfortunately, the corresponding work in the kops project is still in progress (1, 2, 3, and 4).
We cannot simply build a custom Amazon Machine Image either, because the /etc/sysconfig/docker
file is created during bootstrap, here.
So stay patient until this pull request is through.
If you are a monkey-patch person, you could consider modifying the AWS Auto Scaling launch configuration's user data script. You can get it via
aws autoscaling describe-launch-configurations --launch-configuration-names nodes.<YOUR CLUSTER NAME>-<SOME ID>
You will need to base64 --decode
the UserData
part.
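Putting that together, one way to pull the script out for inspection (the --query path follows the shape of the describe-launch-configurations output):
aws autoscaling describe-launch-configurations \
    --launch-configuration-names nodes.<YOUR CLUSTER NAME>-<SOME ID> \
    --query 'LaunchConfigurations[0].UserData' --output text \
    | base64 --decode > user-data.sh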
Or you simply wait for the PR to be included in a stable release.