
If you want to develop using Sublime Text (or another editor) on a Mac while your company uses ASP.NET MVC for the back-end server, this information may help you set up a shared drive from the Mac into a VM: build using Visual Studio in the VM, but develop on the Mac.

These instructions are Parallels-specific, but should be very similar for other VM hosts.

Set up a static IP address on your Virtual Machine

  1. In Windows, go to Control Panel > Network and Internet > Network and Sharing Center
  2. Click “Ethernet” under the active networks section.
  3. Click the “Properties” button.
  4. Scroll down and highlight “Internet Protocol Version 4 (TCP/IPv4)”
  5. Click the “Properties” button
  6. Select the “Use the following IP address:” option and use the following (exactly)
    • IP Address: 10.211.55.3
    • Subnet Mask: 255.255.255.0
    • Default Gateway: 10.211.55.1
    • Preferred DNS Server: 10.211.55.1
    • Alternate DNS Server: (leave this one blank)
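
If you prefer the command line, the same settings can be applied with netsh from an elevated Command Prompt in the VM (this assumes your adapter is named “Ethernet”, as in step 2):

netsh interface ip set address name="Ethernet" static 10.211.55.3 255.255.255.0 10.211.55.1
netsh interface ip set dns name="Ethernet" static 10.211.55.1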

Add to hosts

On your Mac, edit the /etc/hosts file.

Add the following line, with your VM guest IP address from the previous step:

10.211.55.3 localhost
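
Editing this file requires root privileges, so from Terminal do something like:

sudo nano /etc/hosts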

Install IIS on Windows

This has been documented in a thousand places. I followed these instructions.

Enable .NET

  1. Type appwiz.cpl into Run (WIN +R)
  2. On the left hand side click on the “Turn Windows features on or off” link.
  3. Now expand Internet Information Services > World Wide Web Services > Application Development Features.
  4. Place a check next to ASP.NET 4.6 (or your version), which will also check three other boxes.
  5. Click OK.
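
If you'd rather script this, the equivalent can usually be done with DISM from an elevated prompt. The feature name below is my assumption for ASP.NET 4.x on Windows 8/10; verify the exact name on your machine with dism /online /get-features:

dism /online /enable-feature /featurename:IIS-ASPNET45 /all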

Install URL Rewrite 2.0

  1. Download and install the Web Platform Installer on Windows
  2. Once it installs and opens, search for URL Rewrite 2.0
  3. Click the Add button
  4. Down below, click the Install button
  5. Accept the terms
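
The Web Platform Installer also ships with a command-line tool, so steps 2–5 can likely be collapsed into one command (the product ID UrlRewrite2 is an assumption; you can confirm it with WebpiCmd.exe /List /ListOption:All):

WebpiCmd.exe /Install /Products:UrlRewrite2 /AcceptEula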

Enable Network Sharing

Windows 8:

  1. Open File Explorer
  2. Click the Network shortcut on the left column
  3. When prompted, click the little yellow bar at the top that says “Click to change”
  4. Select “Turn on network discovery and file sharing”
  5. Select the “No” option.

Windows 10:

  1. Settings > Network & Internet > Ethernet
  2. Click the blue Network Icon (Network Connected) under the Ethernet title
  3. Turn the switch to on.
  4. Restart

Configure your web server

  1. In IIS Manager, go to your Default Web Site under Sites.
  2. Click Basic Settings from the column on the right.
    • Select the DefaultAppPool.
    • Point the Physical Path to your web project folder. (something like \\Mac\Share\repo\Project.Web)
    • Click OK
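
If you prefer to script the site configuration, appcmd can make the same change (the physical path here mirrors the example above; adjust it to your share):

%windir%\system32\inetsrv\appcmd.exe set vdir "Default Web Site/" /physicalPath:"\\Mac\Share\repo\Project.Web"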

Turn off Private Network firewall

  1. In Windows 8: Go to Control Panel > System and Security > Windows Firewall > Turn Windows Firewall on or off. In Windows 10: Go to Settings > Network and Internet > Ethernet (left tab) > Windows Firewall > Turn Windows Firewall on or off.
  2. Under Private network settings, choose to Turn off Windows Firewall
  3. Click OK.
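
This too can be scripted if you find yourself rebuilding VMs often; from an elevated prompt:

netsh advfirewall set privateprofile state off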

Build

Build the code in Visual Studio

Test

  1. Test to see if you can access the content from within the Virtual Machine
  2. If that works, try it from your Mac

Viewing Changes

  1. Any server code (C#, .NET, Razor) changes made to the app will require the DefaultAppPool to be recycled before they will appear in your browser on the Mac.
  2. Fire up IIS Manager by typing inetmgr into Run (WIN + R)
  3. Select Application Pools > DefaultAppPool
  4. Choose Recycle in the right column (note: recycle happens very fast)
  5. Refresh the browser (this usually takes a moment or two after a recycle)
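
If recycling by hand gets old, the same recycle can be done from the command line (or wired into a build step) with appcmd:

%windir%\system32\inetsrv\appcmd.exe recycle apppool /apppool.name:DefaultAppPool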

Troubleshooting

  1. Build > Clean Solution
  2. Build > Rebuild Solution
  3. Restart the app pool
  4. Make sure your mapped drive > repo directory isn’t marked as read-only
    • Right-click your repo directory > Properties
    • If Read-only is checked, uncheck it and apply to all subfolders and files.


It is obvious from the grunt-contrib-watch documentation that it is possible to get livereload working over https to avoid browser complaints such as Google Chrome’s blocked message “this content should also be loaded over HTTPS.”

What isn’t so obvious is how to go about getting it all working properly—especially to a front-end focused developer, who may not be familiar with server-side SSL, keys, and certs.

Create .key/.crt Files

For livereload to work over https, you need to provide a path to both a private key file and a certificate. These can easily be autogenerated using a CLI tool called OpenSSL. There are other examples out there on how to do this with Windows, but for this example we’ll stick to Mac/Linux using Terminal. Note: in the following examples the files will be generated and saved to the current directory, so if you want them saved somewhere else, either move them after generation completes or cd (change directory) to the target directory prior to executing.

A private key can be created using OpenSSL on a Mac by opening Terminal and running the following command:

openssl genrsa -out livereload.key 1024

The first step to getting a certificate is to create a “Certificate Signing Request” (CSR) file. This is done with:

openssl req -new -key livereload.key -out livereload.csr

Several questions will be asked, but many of them are purely optional. Fill out the minimum you care to include. These options are generally used for submitting the .csr file through a certificate verification process and using it in production. Since we won’t be submitting these or using them for production, it doesn’t really matter what you answer. For convenience, answer “localhost” for the Common Name.

Now, to finally create a self-signed certificate with the CSR, do this:

openssl x509 -req -in livereload.csr -signkey livereload.key -out livereload.crt

Now you should have three files generated. You really only need to hang on to two of them: livereload.key and livereload.crt. Copy or move those files into your project at a location accessible by your Gruntfile.js.
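
As an aside, if you'd rather skip the interactive questions, OpenSSL can do all three steps in one shot; this variant also uses a 2048-bit key, which modern browsers are happier with:

openssl req -x509 -newkey rsa:2048 -nodes -keyout livereload.key -out livereload.crt -days 365 -subj "/CN=localhost"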

Configure Grunt Watch Settings

Follow the example for configuring livereload in your grunt-watch settings given in the documentation. For convenience, the example is copied here:

watch: {
  css: {
    files: '**/*.sass',
    tasks: ['sass'],
    options: {
      livereload: {
        port: 9000,
        key: grunt.file.read('path/to/livereload.key'),
        cert: grunt.file.read('path/to/livereload.crt')
        // you can pass in any other options you'd like to the https server, as listed here: http://nodejs.org/api/tls.html#tls_tls_createserver_options_secureconnectionlistener
      }
    },
  },
},

A few things to note here:

First, you will notice that we didn’t set the port to the standard 35729 used by livereload; we used 9000 instead. This is intentional. If you have multiple watches or even the LiveReload app running, you can continue to do so over http (non-secure) on the default port without conflict. This has come in handy for me on multiple occasions, so I just recommend getting into the habit of using a different port, even if you never end up needing it.

Next, livereload uses its own key and crt files. These do not need to be the same key/crt used by your app, which is why there’s no real harm in including them in your project: they aren’t valid keys and won’t be used anywhere else. Although you could reuse your app’s key/crt, the pros of keeping them separate outweigh the cons. We’ll get more into those pros and cons in a moment.

Lastly, you don’t ever explicitly tell livereload to use https. As long as you provide both a key and a crt, it will automatically do so.

Enable Livereload Listener

If you are used to using a browser plugin to inject the livereload.js listener into your page, you will probably be as frustrated as I am that these all have hard-coded http:// paths and no option for using https://. So we are left with our only option: including the livereload.js listener manually. To do this, you must place the following script at the bottom of your page. I generally delete this line before pushing to production, which is why those browser plugins are so handy:

<script>document.write('<script src="https://' + (location.host || 'localhost').split(':')[0] + ':9000/livereload.js"></' + 'script>')</script>

Notice that we are including both https:// and our previously set port :9000 in this script. There have been cases where I’ve needed to hard-code the host instead of letting this script inject it. For example, if I’m running livereload on localhost, but my page is being served from a development hostname, then I would just hard-code localhost. The important thing is that it looks for livereload.js from wherever your watch server is serving it up, not from your app per se. Make sense?
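
For reference, the hard-coded variant from that localhost example would look something like this:

<script>document.write('<script src="https://localhost:9000/livereload.js"></' + 'script>')</script>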

The Tricky Part

Because you are using a “self-signed” certificate to serve up livereload over SSL, your browser will be unhappy about it. Without the steps below, you will likely start seeing strange errors in your console, and livereload will still not reload your content as expected.

We must first tell the browser we are cool with the self-signed certificate.

To do this, simply open a new tab in your browser and browse to your livereload.js being served up by grunt-watch. For example, you will likely go to the following url:

https://localhost:9000/livereload.js

Now you can see right up front that your browser is scared to death. It’s OK; we don’t need to be. Just tell your browser you are cool with it and let it load that file “anyway.” Once you do, you will probably see the JavaScript contents of the livereload.js file being served, to let you know your browser isn’t so scared anymore.

Head back to the browser tab you are developing from using livereload and refresh.

Note: If you are using the same key/crt as your app, you might be able to avoid this step (remember the pros/cons mentioned earlier). But chances are your livereload isn’t being served on the same host that your certificate was set up for, and the browser will complain anyway. I like my way better. Feel free to disagree.

Congratulations!

Livereload should now be working for you over SSL without being blocked and without errors.


Background

I’m in the process of setting up a new Droplet on DigitalOcean. Mine is an Ubuntu droplet. The joy of DigitalOcean is that you have full control over your (super fast) server, and at a great price. You want Apache? You can install it yourself. You want Git? Same deal. Trying out Node.js? You get the idea. Of course, with the good comes the bad: you have all the power and all the responsibility. So, when I set up SSH in order to use Git without constant username and password prompts, I ran into some issues.

The Problem

Once you set up SSH to be used with SSH keys, it relies on the ssh-agent running to serve up those keys to other apps (like Git). The problem is, once you log out of your session on the server (via SSH), the ssh-agent also goes away and no longer serves up the keys. When you log back in and do something like `git pull`, you are likely to be greeted with a message saying

Permission denied (publickey).

and through a little digging, you might even come up with the error

Could not open a connection to your authentication agent.

This is because the ssh-agent process has stopped. To start it back up, you would use

ssh-agent /bin/bash
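
Note that starting the agent isn’t enough on its own; you still have to load your key into it with ssh-add. Assuming the common id_rsa path, the full dance looks something like:

eval $(ssh-agent -s)
ssh-add ~/.ssh/id_rsa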

But that’s a pain to do every time you log into your server.

The Solution

The solution I chose uses a helpful app called Keychain. This should not be confused with Mac OS X Keychain; they are not the same thing. Keychain is a program designed to help you easily manage your SSH keys with minimal user interaction. It is implemented as a shell script which drives both ssh-agent and ssh-add. A notable feature of Keychain is that it can maintain a single ssh-agent process across multiple login sessions. This means that you only need to enter your passphrase once each time your local machine is booted.

Installation

With Ubuntu you can use apt-get to install Keychain fairly painlessly:

apt-get install keychain

More info on Keychain usage

Tricks

If you are like me, and you do not want to run the Keychain command or get asked for your passphrase every time you log in, you may add the following to your .bashrc (or .bash_aliases if that’s how you roll):

alias ssh='eval $(/usr/bin/keychain --eval --agents ssh -Q --quiet ~/.ssh/*_rsa) && ssh'

A few things to note with the alias line above.

  • This basically checks to see if Keychain is doing its thing already, and if not, gets it going.
  • The reason for the alias is that it’s basically tacking itself onto the very command you would want Keychain for in the first place—in this case: ssh. This means that the first time you actually attempt to connect via ssh it will just work. No hassle.
  • Most examples of the line above will give you a concrete file path to the SSH key, but I attempted it with the wildcard (*) and it works great. If you are like me and use different keys for different services (GitHub and Bitbucket, for example), then this might be very useful to you (see the sketch after this list). If you only use id_rsa, then feel free to plug that in. The important thing to know is that you are putting the path to your keys in that spot. I’m sure you can find many more examples to suit your style on Google.
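
For example, the multiple-key version with explicit paths would look like this (the key filenames here are hypothetical; substitute your own):

alias ssh='eval $(/usr/bin/keychain --eval --agents ssh -Q --quiet ~/.ssh/github_rsa ~/.ssh/bitbucket_rsa) && ssh'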

But Wait, There’s More

The alias above is the one you will see all over the place as a suggestion to use with Keychain, but it only works if you use ssh as your trigger. What about the original problem of using `git pull` and seeing errors? Well, it turns out you can use another, similar alias

alias git='eval $(/usr/bin/keychain --eval --agents ssh -Q --quiet ~/.ssh/*_rsa) && git'

which accomplishes the exact same thing when you run your first git command after restarting your machine.

Let me know in the comments section if this helped you!


My MediaWiki extension, FancyBoxThumbs, has been completely rewritten from the ground up and released. This extension makes use of, and provides functionality for, the fancyBox jQuery lightbox plugin. The default behavior of MediaWiki is to take you to a dedicated image page when a thumbnail is clicked, which is less than ideal for user-friendliness in most situations.

ResourceLoader

This new version takes full advantage of MediaWiki’s ResourceLoader, which makes the loading of CSS and JS much more optimized. I did end up using a line of addInlineScript, which appears to be deprecated, but I haven’t been able to determine the best way around it. ResourceLoader—as far as I can tell—doesn’t provide a way to write JavaScript using PHP variables, which is what I needed in order to allow custom fancyBox options.

Better URL rewrite

In order for fancyBox to work with MediaWiki, the URL of the thumbnail link needs to be rewritten to link to the full-size image rather than the File page. There doesn’t appear to be an easy way to accomplish this using the MediaWiki API, especially with hashed image URLs being the default setting. The easiest way to accomplish this rewrite is to use jQuery to parse the existing URL and determine what the new one should be.

Previous versions of this extension did a whole lot of string parsing to deconstruct and reassemble the URL. This time I decided to split the URL into an array and manipulate each part individually, which made for much cleaner code and much more reliable parsing. The final result works great… until MediaWiki changes their URL structure 😉
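
As a sketch of the idea (illustrative only, not the extension’s actual code): a default hashed MediaWiki thumbnail URL ends in a sizing segment and contains a thumb segment, both of which can be dropped after splitting on slashes.

// Illustrative only: rewrite a default hashed MediaWiki thumb URL
// like /images/thumb/a/ab/Example.jpg/300px-Example.jpg
// to the full-size /images/a/ab/Example.jpg
var thumbUrl = '/images/thumb/a/ab/Example.jpg/300px-Example.jpg';
var parts = thumbUrl.split('/');
parts.pop();                             // drop the "300px-Example.jpg" segment
parts.splice(parts.indexOf('thumb'), 1); // drop the "thumb" segment
var fullUrl = parts.join('/');           // "/images/a/ab/Example.jpg"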

fancyBox Version Upgrade

Until now, the extension used fancyBox version 1.3, but it now takes full advantage of version 2.0. The only problem with this is that fancyBox v2.0 has a different license, which requires you to pay if you use it for commercial purposes. I have used fancyBox on many projects and find it to be a great product. If you do use this extension for commercial purposes, please consider dropping a few bucks.

fancyBox Options

Previously, I provided a few hard-coded options to the fancyBox object. As mentioned above, version 2.0 allows you to set your own options in the LocalSettings.php file. To do this, simply specify your options as a JSON string in a $fbtFancyBoxOptions variable after enabling the plugin:

require_once("$IP/extensions/FancyBoxThumbs/FancyBoxThumbs.php");
$fbtFancyBoxOptions = '{"openEffect":"elastic","closeEffect":"elastic","helpers":{"title":{"type":"inside"}}}';

Repo

This extension is an open source project found at github.com/gilluminate/FancyBoxThumbs. I welcome and encourage comments and pull requests if you find things aren’t working for you or you would like to contribute in any way.


Google released an awesome tool this year as a Chrome Extension called Accessibility Developer Tools, which adds a feature to the Audits panel of Chrome’s dev tools. With this extension enabled, you can run an audit on the accessibility compliance of the page you are viewing (either by refreshing or in its current state). This tool is very useful for quickly checking how well your site or web app accommodates users of assistive technology, such as screen readers and screen magnifiers, and users who may not be able to use a mouse.

The other handy tool that I highly recommend is ChromeVox, which is also by Google and is also a Chrome Extension. Google released ChromeVox in 2011 as a screen reader that lives right inside your browser. The thing I like most about it being a browser extension is that it won’t start speaking while you are coding or doing other tasks; it only functions inside the browser. Also, it’s really easy to turn on and off using a keyboard shortcut.

Both of these tools were presented at Google I/O in the past couple of years. I highly recommend watching both the 2011 and the 2012 presentations, as they cover not only these tools but also really good information on making your site as accessible as possible.

As the 2011 video explains, here are the four major steps (to be accomplished in order) to adding accessibility to your site:

Accessibility

  1. Use clean HTML
  2. Manage focus
  3. Add key handlers
  4. Add ARIA attributes

If you are new to ARIA, it’s actually not as intimidating as you might think. It basically involves assigning roles to the main widgets and interactive elements on your site. Once a role has been established, it comes with its own set of ARIA attributes that help screen readers know how best to describe your app to visually impaired users.
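
As a tiny illustration (hypothetical markup, not taken from either presentation), a tab widget might be marked up like this, with a role on each element and the ARIA attributes that coincide with it:

<!-- Hypothetical example: a simple ARIA tab widget -->
<div role="tablist">
  <button role="tab" id="tab-profile" aria-selected="true" aria-controls="panel-profile">Profile</button>
  <button role="tab" id="tab-settings" aria-selected="false" aria-controls="panel-settings" tabindex="-1">Settings</button>
</div>
<div role="tabpanel" id="panel-profile" aria-labelledby="tab-profile">Profile content…</div>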

None of this is new information, but it is certainly new to me. In my current job we are building web-based applications for the U.S. Department of Veterans Affairs and are required to comply with Section 508 laws. It has never been more important that I pay attention and adhere to these standards. And now that I’m getting deeper into learning them, I am actually loving it. The tools I mentioned above have made this task quite bearable.