Remote state is (IMO) one of Terraform’s most powerful and unsung features. It’s also a feature that I notice a lot of first-time users (and, unfortunately, sometimes people who have been using the tool for a while) tend to gloss over and ignore. For the first-time user, the light bulb about the problem remote state solves usually goes on when a scenario like this comes to pass:
Background In my last post, I talked about creating lean providers for maximum flexibility. As I closed out the post, I mentioned the potential peril of performing operations against the wrong account by virtue of having the AWS_PROFILE variable set to a profile for an account other than the one you intend to work with (and believe me, if you work with more than a handful of accounts, this is a very easy mistake to make).
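A cheap safety net against that mistake is to confirm which account the active credentials actually resolve to before running anything destructive. The sketch below is an illustration of the idea, not code from the post; the account ID is a placeholder, and `guarded_plan` is a hypothetical wrapper name.

```shell
# Placeholder: the 12-digit ID of the account you intend to target.
EXPECTED_ACCOUNT="123456789012"

# check_account EXPECTED ACTUAL -> succeeds only when the two IDs match
check_account() {
  [ "$2" = "$1" ]
}

guarded_plan() {
  # `aws sts get-caller-identity` reports the account that the active
  # credentials (including anything picked up via AWS_PROFILE) resolve to.
  actual="$(aws sts get-caller-identity --query Account --output text)" || return 1
  if ! check_account "$EXPECTED_ACCOUNT" "$actual"; then
    echo "Refusing to run: credentials resolve to account $actual," \
         "expected $EXPECTED_ACCOUNT (check AWS_PROFILE)." >&2
    return 1
  fi
  terraform plan
}
```

Calling `guarded_plan` instead of `terraform plan` directly turns the wrong-profile mistake into a loud failure rather than a change in the wrong account.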
In the last 15 months, I have become a daily user of Terraform. In that time, several things have happened in my “relationship” with the tool:

- I have used it to build out some complex (as well as some not-so-complex) deployments (and learned a lot of lessons the hard way).
- I have been particularly impressed with the growth in the tool’s capabilities and with HashiCorp’s commitment to pushing the tool forward by expanding its capability set as well as improving core functionality.
“Why am I so nervous? I haven’t felt like this since… I took the first one.” After five AWS exams (particularly the two pro exams) and a handful of other certification exams, I didn’t really expect to be nervous taking this exam. Oddly enough, this is exactly how I felt going in, during, and in that inexorable “moment” of time between clicking the Submit Exam button and actually seeing the result on the page.
Recently I have been working on a project to provision an Elasticsearch cluster for a client. With the recent release of v5.3.1 of the ELK stack, we decided to go ahead and build to that release. Things had been proceeding fairly smoothly, up until this evening when I got hung up with Logstash. The Logstash RPM from the Elastic repository is very good about intelligently detecting whether your system uses Upstart, SysVinit, or systemd as its init manager and placing the appropriate startup file.
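For the curious, the decision those package scripts have to make can be approximated by looking at what PID 1 actually is. This is a rough sketch of one common detection approach, not the actual logic shipped in the RPM; the `comm_file` parameter exists only so the logic can be exercised against a file other than the live `/proc`.

```shell
# detect_init [COMM_FILE] -> prints systemd, upstart, sysvinit, or unknown
detect_init() {
  comm_file="${1:-/proc/1/comm}"
  init_name="$(cat "$comm_file" 2>/dev/null)"
  case "$init_name" in
    systemd) echo "systemd" ;;
    init)
      # Both SysVinit and Upstart name PID 1 "init"; Upstart's init will
      # identify itself when asked for its version.
      if /sbin/init --version 2>/dev/null | grep -q upstart; then
        echo "upstart"
      else
        echo "sysvinit"
      fi
      ;;
    *) echo "unknown" ;;
  esac
}
```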
In my last post, we discovered how to customize some of Datadog’s pre-packaged integrations to quickly build actionable insights for SQL-backed applications. In this post, we are going to dive a bit deeper and look at how we might integrate Datadog with pre-built or custom metrics tooling (a shell script, for example). As I wrapped up the last post, I alluded to a metric “gathered via a different route”; this metric keeps a count of the number of individual client processes running on the host.
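To give a flavor of what that “different route” can look like, here is a minimal bash sketch that counts matching processes and ships the result to the Datadog agent’s local DogStatsD listener on its default port (8125). The process name `myclient` and the metric name are hypothetical stand-ins, and this is not necessarily the mechanism the post goes on to use.

```shell
# format_gauge NAME VALUE -> a gauge in DogStatsD's datagram wire format
format_gauge() {
  printf '%s:%s|g' "$1" "$2"
}

report_client_count() {
  # pgrep -c counts processes whose name matches the pattern;
  # fall back to 0 when nothing matches (pgrep exits non-zero)
  count="$(pgrep -c myclient 2>/dev/null)" || count=0
  # bash's /dev/udp pseudo-device sends the datagram to the local agent
  format_gauge "custom.myclient.count" "$count" > /dev/udp/127.0.0.1/8125
}
```

Note that `/dev/udp` redirection is a bash feature, not POSIX sh, so a script like this needs a `#!/bin/bash` shebang.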
TL;DR This is not a brain dump or a “how-to-pass exam X” post. There are plenty of those elsewhere already. A little over a year ago (last August, to be exact), I unknowingly forked my career path. You may be asking how it is that I came to cause such a major divergence while remaining unaware that I had done so. The answer is both very simple and quite complicated: I got my first AWS certification.
“If you can’t measure it, you can’t improve it.” This quote, attributed to Peter Drucker, has held a lot of sway over the years in management circles where high-level business processes are created, iterated, tweaked, and refined to ensure that revenues (and hopefully, profits) lead to healthy, long-lasting companies. At Blue Sentry, our Managed Services offering is built upon this same maxim. We rely on several tools to provide insight into customer environments; in this post, I am going to focus on one of my newfound favorites from this toolset – Datadog – and its ability to capture custom metrics.
Since I moved back in April, my server has been offline, and with it my old WordPress-based blog. In my new home, there isn’t really anywhere convenient to run a noisy 1U server blade. This got me thinking about whether the server was really essential – of course I still wanted to host my blog somewhere, as well as some other services that I really like having hosted in-house: my own personal Seafile server and email among them.