I feel pondering hard Questions leads to more knowledge than just seeking answers. Here I'll try to strike a balance between the Questions I've had and the potentially correct Solutions to match.

Thursday, November 6, 2014

Setting up an Insecure Docker Registry

Running anything in an insecure mode is always dangerous. However, if the goal is simply to test something out or to run in a secured environment, it can be useful. That was my use case: learning about the Docker Registry and, for speed, skipping SSL certificates by running it in insecure mode.

I'm not going to cover working with Docker in general, just setting up an insecure registry; head over to the documentation first to learn more. I'm just going to journal this problem so hopefully no one else has to waste time figuring it out.

There are numerous blog posts about setting up Docker's Registry (properly), and most go over setting up some sort of authentication (recommended). However, if the risky insecure route is fine, there's a slight hiccup I found which was rather opaque to solve.

So let's follow the basics of getting a localized Docker Registry running.

  1. Pull the registry image from docker hub
    • docker pull registry
  2. Run the container with the local environment (quieter output than the default dev)
    • docker run -d -p 5000:5000 --name registry registry 
    • A docker ps should now show it running
    • You can check that it's running by hitting localhost:5000/ in your web browser, which should return:
      • "docker-registry server (dev) (v0.8.1)"
  3. Push an image to the repo prefixed with the registry's address. Try pushing a basic `ubuntu` image to the registry on your local machine: localhost:5000
    1. One quick aside: the way Docker switches away from the default DockerHub API is by prefixing the image name with the new registry address. So tag the basic 'ubuntu' image as 'localhost:5000/ubuntu'.
      • docker tag ubuntu localhost:5000/ubuntu   
      • docker images (to verify it worked)
    2. Then push the tagged image
      • docker push localhost:5000/ubuntu
And here comes the error...


Error response from daemon: Invalid registry endpoint https://localhost:5000/v1/: Get https://localhost:5000/v1/_ping: EOF. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry localhost:5000` to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/localhost:5000/ca.crt


  
I thought this was running in an open dev mode?

There are multiple environments which the registry can run under with varying settings; they can be seen in the provided config file. However, after digging through blogs, I eventually stumbled across this Github issue which flipped the light switch.

The error comes from the Docker daemon running in its normal mode, which by default requires HTTPS for communicating with Registry APIs. The 'daemon' here is the local machine's, not the registry's server (I should have been drinking more coffee).
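To make the mismatch concrete, a quick check along these lines shows it (a sketch; exact responses will vary by version): the dev registry happily speaks plain HTTP, while nothing answers HTTPS.

$ curl http://localhost:5000/v1/_ping
true
$ curl https://localhost:5000/v1/_ping
curl: (35) Unknown SSL protocol error in connection to localhost:5000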


Solution

Shut down your docker daemon. On Debian-based distros:

  • sudo service docker stop

Run the docker daemon with the insecure flags:  

  • sudo docker -d --insecure-registry localhost:5000
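If you'd rather not babysit a foreground daemon, the flag can be made persistent instead. A sketch for Debian/Ubuntu packaging, assuming the stock init script which reads /etc/default/docker:

  • echo 'DOCKER_OPTS="--insecure-registry localhost:5000"' | sudo tee -a /etc/default/docker
  • sudo service docker start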

Start the `registry` container back up, since it was stopped when the old daemon shut down.

  • docker start registry

and..
$ sudo docker push localhost:5000/ubuntu
The push refers to a repository [localhost:5000/ubuntu] (len: 1)
Sending image list
Pushing repository localhost:5000/ubuntu (1 tags)
511136ea3c5a: Image successfully pushed
d497ad3926c8: Image successfully pushed
ccb62158e970: Image successfully pushed
e791be0477f2: Image successfully pushed
3680052c0f5c: Image successfully pushed
22093c35d77b: Image successfully pushed
5506de2b643b: Image successfully pushed
Pushing tag for rev [5506de2b643b] on {http://localhost:5000/v1/repositories/ubuntu/tags/latest}

Now you can mess with an unsecured registry!

Hopefully this post saves people some time figuring out what's wrong.

Cheers!
@joshroppo




Tuesday, May 6, 2014

Write the Docs 2014 NA: Volunteering Tales!


*ROUGH DRAFT* (Taking hints from all the great speakers and just getting something written!)
Getting to be part of the NA Write the Docs conference was a really fun experience. It was exhausting and a little stressful from time to time, but things rolled smoothly and overall it seemed to be a great success!



Volunteering role: my main task was to wrangle speakers by making sure they were checked in and knew where they needed to be. The Crystal Ballroom's musically charged Green Room was the quiet prep room for speakers. An email was sent to the speakers about its availability. However, not all of them saw the email or found the room, so finding and checking them in became the most difficult task. Once a speaker was found by any of the conference organizers, things calmed down since we knew they were at least there. However, I had some frantic searches comparing names with fuzzy head shots to track people down. In some cases there were no head shots and none of the organizers knew what the speaker looked like (the uncommon worst case).

The first day was the craziest, as with doing anything for the first time: I was often desperately walking through the crowd attempting to identify speakers.
I apologize to all the attendees who were weirded out by my awkward glances at their name badges.
  •  Side note: I wish the badge lanyards were shorter to make gawking at names easier...

Once a speaker was found, the general procedure was to..
  • Make sure the speaker knew of the Green Room('s location) and its quietness 
  • Give them their gifts: Coffee, Chocolate, and Hoodie!
  • If they chose to prep in the Green Room
    •  Act as their talk-time wake-up-call
    • Introduce speaker to the AV crew to ensure no surprises
  •  Generally make sure they were informed about what was going on, and notify them of any delays.
All of the speakers were incredibly kind, and most were fairly happy to find the quiet of the Green Room to collect their thoughts. It wasn't completely necessary to corral the speakers, but knowing they were ready, and knowing that they knew when they needed to be on stage, was comforting to all the organizers. It also ensured that speaker transitions went fairly smoothly.

Potential improvements: the biggest would be more easily tracking down the speakers and getting them checked in. A few ideas Ruth and I bounced back and forth while the conference was winding down:
  • Having mobile contact info for all the Speakers
  • Requiring mug shots of all speakers..
  • Different name-badges
  • At sign-in, keep speakers' badges aside during registration so one of the organizers can get them informed/checked in and establish some mental facial recognition.
Write the Docs was the first formal tech conference I've been involved with and I'm glad I volunteered.  I have helped out with some PIGSquad Game-Jams in the past but those were less organizationally challenging and smaller scale. 

One of the biggest surprises for me was how polite and generally excited all the conference attendees were! Everyone smiled while passing in the hallway and the atmosphere was exceptionally warm and inviting. At all the tech events I've been to in the past there were generally a few difficult individuals and zones of silence where no one was interested in talking. Not the case at WtD! It almost seemed that if there were magical universal time allowances for everyone, the conference and discussion would have gone on for days longer!

Eric, Troy, and Ruth have definitely found a group of people who are the unicorn merging of technical writers and developers: people who care about documenting the intellectual achievements of our Information Age and haven't had a place to congregate before. That might partly explain the enthusiasm of the attendees. All of the organizers deserve serious props for creating such an inviting atmosphere and great content!


To be quite honest, I wasn't insanely passionate about documentation before the conference. My main opinion previously came from my former job: it really sucks when there are no good docs on systems. On my last day I stayed up till 4AM to document the projects I was leaving behind; I didn't want to leave a vacuum of knowledge like the one I had already experienced. Post-conf, seeing all of the great talks has definitely strengthened my appreciation for documentation. Wherever I end up working next, I hope we write the docs in an efficient and appreciative fashion!

Nik ensuring Write The Docs' legacy remains on the Crystal legends board!
Great times!

Hopefully when my brain has recovered I'll add more to this post as great moments trickle through my mind...

Friday, April 25, 2014

Python to Scala: Virtualenvs to sbt for project management

At PDX Scala on 2014/4/9, Thomas gave a great introduction to using sbt for simple-to-complex project management. Most of my experience dealing with significant dev environments comes from the Python world, using Virtualenv and its handy wrapper.

sbt big takeaways:

  • Fully unified tool built in Scala for project management and development
  • Tilde (~) operators give scripting-language flexibility to compiled Scala (see the sketch after this list)
  • Very similar to Python's Virtualenv-Pip tools, but unified into a single tool
  • (Learned the hard way)  Many of the simplicities of Python/Interpreted languages don't translate to the Scala/JVM world. The sbt documentation expects a certain amount of JVM domain knowledge which I had long forgotten.
    • Ergo: Configuration is more difficult than Virtualenv's
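A quick illustration of that tilde: prefixing a task with ~ puts sbt into triggered execution, re-running the task every time a source file changes, which gives a very interpreted-language-feeling edit/run loop:

$ sbt
> ~compile
> ~test

The first re-compiles on every source save; the second re-runs the test suite.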
Going further, I hope to compare and contrast the two tool sets to gain a better understanding of both. All of what I say about the sbt side of things is subject to immense salt and a newbie's understanding of the Scala/Java world. Please respond with corrections, constructive criticism, and improvements!

If there's one area lacking in my understanding of the Scala world, it's the legacy of Java and all the paradigms of managing the JVM.


Setup Project:

Virtualenv: Use virtualenvwrapper to initialize the environment, then create directories for the project.
$ mkvirtualenv venv
(venv)$ cd projectdir
(venv)$ mkdir  projectsrc
(venv)$ touch projectsrc/__init__.py
(venv)$ echo 'print("hihi!")' > projectsrc/hihi.py
(venv)$ python -m projectsrc.hihi

sbt: In the project's directory, run the sbt tool and create a directory structure matching what sbt expects:
  • Sources in the base directory
  • Sources in src/main/scala or src/main/java
  • Tests in src/test/scala or src/test/java
  • Data files in src/main/resources or src/test/resources
  • jars in lib
$ cd projectdir
$ touch build.sbt #sbt base config
$ mkdir -p src/main/scala
$ echo 'object Hi { def main(args: Array[String]) = println("hihi!") }' > src/main/scala/hw.scala
$ sbt
> run
Memory management voodoo from Thomas: in the home directory, create a ~/.sbtconfig file and add memory management flags for execution:
SBT_OPTS="-Xms512M -Xmx1536M -Xss1M -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=256M"
Apparently sbt can start to eat up a lot of memory if left running for a long time. I don't know the details; just standing on larger shoulders here.

Project Customization

Dependency Management

Virtualenv/Python: Using the virtualenv's pip to install all the necessary libraries makes it easy to export the versioned dependencies of a project:
(venv)$ pip freeze > projectdir/requirements.txt
When cloning a codebase, assuming the owner has been keeping the requirements file up to date, a new user can use the file to install a mirror of all the necessary dependencies.
(venv)$ pip install -r projectdir/requirements.txt

Furthermore, a properly configured projectdir/setup.py file, which runs under setuptools to build installable artifacts for deployment, should also contain a manifest of required libraries.
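As a sketch, such a setup.py might look like the following (the package name reuses the earlier example; the pinned library is purely illustrative):

(venv)$ cat setup.py
from setuptools import setup, find_packages

setup(
    name='projectsrc',
    version='0.1',
    packages=find_packages(),
    install_requires=['requests>=2.2.0'],
)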

sbt: Like seemingly most JVM systems, configuration runs dark and deep. sbt is fairly clean but can become very powerful if the dev is knowledgeable enough. I'm only going to cover the basic build.sbt file. Deeper documentation on using the Build.scala files can be found here (maybe another blog post).

Simple library requirements: use the Maven central repository to find the library, pull up its Artifact Details page, and under Dependency Information copy the 'Scala SBT' definition, then add it to the line-separated build.sbt. e.g.:
libraryDependencies += "com.typesafe.slick" % "slick_2.10" % "2.0.1"
The next run of compile will resolve and fetch the dependencies.
Additionally, there are ways of adding dependencies via sbt's CLI, which can be found TODO:here.
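For the curious, a sketch of the sbt-shell route (assuming sbt 0.13's set and session commands): set applies a setting for the current session, and session save writes it back into build.sbt.

> set libraryDependencies += "com.typesafe.slick" % "slick_2.10" % "2.0.1"
> session save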

 

Environment Variables

Virtualenv: Personal taste: I use the virtualenv setup script postactivate for loading any environment variables. Mileage will vary, and virtualenvs allow several points of entry for customization. I prefer to tie the env-vars to the virtualenv so that if you want to check something in the REPL there's no requirement to be in the project's directory, as there is with autoenv (although it is a cool tool).
$ cd ~/.virtualenvs/venv/bin
$ vi postactivate
Then write: export POSTGRESPASS="123456"

sbt: Figuring out how to get environment variables into the sbt runtime became my White Whale. Ultimately it simply required a deeper understanding of sbt's internals and settings management, along with realizing that the 'envVars' setting is only applied to runtimes where the compiled process is forked.

Environment variables are often used in Python systems for defining sensitive information or development state. Conversely, the JVM ecosystem prefers compile-time or runtime configuration (arguments/flags) over system definitions like environment variables, which interpreted languages tend to favor. Via the freenode #scala channel, tpolecat kindly confirmed that specifying VM runtime system properties via CLI arguments is common JVM practice (I trust his opinion).

HOWEVER, if there is still a wild need to specify environment variables for the runtime, sbt recently added support for it (with exceptions).
In the declarative build.sbt file:
fork := true

envVars ++= Map("ENVIRONMENT_DEF" -> "dev")
Caveat/"fork := true" explanation: the "envVars" setting is only applied to VMs which have been forked from the standard sbt process.  Then envVars setting is not loaded into the sbt process and therefore can't be referenced in the 'console' REPL.

The previous build.sbt definition will map "dev" to "ENVIRONMENT_DEF", which can be referenced in a forked VM with:
System.getenv("ENVIRONMENT_DEF")

REPL

Virtualenv links to the project's python binary, which is configured to use all of the libraries installed by the localized pip. This can include a nicer REPL like iPython, which will be scoped to the virtualenv.

sbt has a 'console' command which acts like the normal 'scala' REPL. When used, the interpreter runs under the project's configuration, and the defined dependencies are accessible. One caveat mentioned earlier: the console lives in the same VM process as sbt, so settings applied only to forked VMs will not propagate to the REPL.
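A quick illustration of that caveat (assuming ENVIRONMENT_DEF is not set in the shell that launched sbt):

$ sbt console
scala> sys.env.get("ENVIRONMENT_DEF")
res0: Option[String] = None

A forked run, by contrast, would see the "dev" value injected by the envVars setting.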

Packaging

Python: Once a proper setuptools setup.py definition has been created, producing the project artifact to install is simple.
python setup.py sdist
This tars up all of the specified files into a source distribution artifact, which can be installed by pip remotely on the server (e.g. with fabric).
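For instance, a hypothetical install on the server (artifact name invented to match the setup.py sketch above):

$ pip install projectsrc-0.1.tar.gz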

sbt: Assuming the library dependencies are properly specified, sbt will build a jar file very simply with the 'package' command.
> package
There is also tooling similar to fabric which can handle deployment via the 'publish' task; obviously this requires more configuration.


This is just my simple overview of how Virtualenvs and sbt compare; there's more to cover, but I think these are good basics to start with. Please comment to point out any inaccuracies or things I might have missed.

Saturday, February 15, 2014

Docker and X: A match made with filesystems

Docker has been generating a lot of hype recently, and for good reason! A lightweight alternative to VMs which can be version controlled and sent straight from development to production! What's not to love?

Web servers are fairly straightforward applications and generally well battle-tested. Something that isn't as stable, and is generally fraught with quirks, is the X desktop. So let's look into whether Docker can support the mess that is X.

When attempting to install an X desktop in a Docker container there is an issue: Docker doesn't have full access to all the core system devices which X needs:
Creating fuse device...
mknod: `fuse-': Operation not permitted
makedev fuse c 10 229 root root 0660: failed
chown: cannot access `/dev/fuse': No such file or directory
dpkg: error processing fuse (--configure):
 subprocess installed post-installation script returned error exit status 1
Processing triggers for libc-bin ...
ldconfig deferred processing now taking place
Errors were encountered while processing:
 fuse
E: Sub-process /usr/bin/dpkg returned an error code (1)


This error is thrown whenever attempting to install xfce or lxde. Consequently I did some googling to skirt the issue of installing full X on Docker and couldn't come up with much. Running X desktops is obviously not Docker's main use case, so that's somewhat expected. However, I then searched for what I was actually trying to accomplish, running Selenium in a Docker container, and this has been done, and quite nicely.

Vvoyer's Docker Selenium Container
The solution is to use Xvfb, which completely bypasses the need for a full graphical stack and allows Selenium to run quietly against an in-memory framebuffer. So assuming you're confident in your Selenium procedures, everything should proceed as usual.
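The core idea, as a minimal sketch (the display number is arbitrary, and chromium-browser is just an example client): start Xvfb on a virtual display, point DISPLAY at it, and anything launched afterwards renders into the in-memory buffer instead of a real screen.

$ Xvfb :99 -screen 0 1280x1024x24 &
$ export DISPLAY=:99
$ chromium-browser &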

Vincent Voyer made a nice write-up of the design and usage of the container, which can be found on his blog: Easy-selenium-chrome-Firefox-installs-with-Docker

There are a few things I would change in regards to using Chromium instead of Chrome, but otherwise it's a solid solution and a baseline for working with X in Docker containers. Hopefully I'll be able to dig deeper into the /dev/fuse error and understand what the real problem is.