Monday, June 13, 2016

Have you used Chocolatey?

If you've not come across Chocolatey, it's certainly worth a look. Those of you who have used Linux will be familiar with yum and/or apt-get; Chocolatey is the Windows equivalent of those package managers.

As a Windows admin of some years, I've used tools such as nLite and Ninite to create custom builds and automated installs. I've also used Windows GPOs to install software or to make it available in the Add/Remove Programs list, but nothing quite compares to the ease with which Chocolatey allows software to be installed.

The way it all works will be very familiar to Linux admins. Chocolatey uses a repository where all the install packages live, and a very simple command installs the software you need, as long as you have an internet connection to the repository.

It's also possible to set up an internal repository so that you can install both your own software and third-party software from a trusted internal source, as there is always a risk that someone has uploaded a malicious installer to the public Chocolatey repository.
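Pointing clients at an internal feed is a single command. A minimal sketch, with a hypothetical internal feed URL (and the public community feed disabled so that only the trusted source is used):

    # Register a trusted internal package feed (URL is a made-up example).
    choco source add -n=internal -s="https://nuget.corp.example/chocolatey"

    # Optionally disable the public community feed.
    choco source disable -n=chocolatey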

Details about the process for using Chocolatey are here and I do encourage you to give it a go. If you use automated/unattended installations, then using Chocolatey to install applications not only makes sure that you've got the latest versions but also gives you a relatively simple upgrade method.
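For a flavour of just how simple it is, here's a minimal sketch; the packages are examples from the public repository:

    # Install packages silently from the repository.
    choco install googlechrome -y
    choco install 7zip -y

    # Later, upgrade everything Chocolatey installed in one go.
    choco upgrade all -y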

Some have asked why I'm so interested in this sort of technology, and it's simply because I've had a bit of a revelation of late. That revelation is around automation and DevOps.
I suspect that most IT folk have unattended scripts for installing Windows, and I also suspect that many have a few scripts floating around that they reinvent when they can't find the original.
DevOps is changing all of that. There are hundreds of tools out there to simplify this work, and it's my firm belief that tools like Chocolatey are part of a huge cultural change coming to IT. Check it out; it's going to be the future.


Tuesday, May 03, 2016

Exploring Windows 2016 TP5 - HyperV

Windows 2016 TP5 just came out and, after largely ignoring the earlier previews, this one is looking rather good, so I thought it was time to give it some attention, starting with Hyper-V.

Windows 2016 has some impressive improvements to Hyper-V; in fact, some of them look like they may well give VMware a run for its money, so it'll be interesting to see how things stack up once 2016 has had time to be deployed to a few datacenters.

My first test for any new Windows-based OS is the WIM file deployment through WDS. With Windows 2016 TP5, this worked perfectly, and it even allowed me to use the same unattend file I created for Windows 2012 R2.

My first test in Hyper-V was to migrate a machine from Windows 2012 R2 Hyper-V to 2016 TP5, as this is going to be something a lot of IT departments will look at first. After all, if you can't get your VMs into 2016, then the uptake will almost certainly be slower.

I was a bit surprised when the move didn't work; it generated an error saying I didn't have permissions, which was a bit strange.

On a whim, I added the 2016 TP5 server to the MMC on the 2012 R2 server and tried the move again, and it worked. It seems that moving from 2012 R2 to 2016 is fine as long as the move is run from the 2012 R2 MMC, which strikes me as a little strange and certainly something worth watching out for when the full version is released.
A move from one 2016 server to another went without error.

Aside from that little bit of strangeness, the move from 2012 R2 to 2016 worked pretty flawlessly.
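If you'd rather script the move than click through the MMC, the same migration can be done with PowerShell. A minimal sketch of a shared-nothing move, run from the 2012 R2 side in line with the finding above; the VM name, destination host and storage path are hypothetical:

    # Live migration has to be enabled on both hosts first (Enable-VMMigration).
    # Move the VM and its storage to the 2016 TP5 host.
    Move-VM -Name 'web01' `
            -DestinationHost 'hv2016tp5' `
            -IncludeStorage `
            -DestinationStoragePath 'D:\VMs\web01'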

Upgrading a VM's configuration version from 2012 R2 (version 5) to 2016 TP5 (version 7.1) is straightforward and takes no time at all, but as with VMware, it has to be done while the machine is powered off.
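The upgrade can also be scripted; a minimal sketch with a hypothetical VM name (note that the upgrade is one-way):

    # The VM must be powered off before the configuration version can change.
    Stop-VM -Name 'web01'
    Update-VMVersion -Name 'web01' -Confirm:$false
    Get-VM -Name 'web01' | Select-Object Name, Version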

Along with the version 7.1 VM you also get something new: production checkpoints. These are essentially snapshots that Microsoft approves for use in a production environment. Microsoft doesn't say how long a VM can run with checkpoints in place; personally, I'd still avoid keeping them for more than a few days, as they will cause slowdowns on large VMs.
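The checkpoint type is a per-VM setting; a minimal sketch, again with a hypothetical VM name:

    # Switch the VM to production (application-consistent) checkpoints.
    Set-VM -Name 'web01' -CheckpointType Production

    # Take one, then list what exists.
    Checkpoint-VM -Name 'web01' -SnapshotName 'pre-patching'
    Get-VMSnapshot -VMName 'web01'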

One other improvement that has been long overdue is the ability to add a vNIC to a live VM. This is something that has been in VMware for years and yet was strangely absent from Hyper-V until now.
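Hot-adding can be done from PowerShell as well; a minimal sketch with hypothetical VM and switch names (hot-add applies to generation 2 VMs):

    # Add a network adapter to the VM while it is running.
    Add-VMNetworkAdapter -VMName 'web01' -SwitchName 'External'
    Get-VMNetworkAdapter -VMName 'web01'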

After using Hyper-V in 2016 for a few hours, I'm impressed with the changes, many of which are long overdue and bring Hyper-V onto a level playing field with VMware. The MMC for it, though, is still many times clunkier than that offered by vCenter.

Wednesday, April 20, 2016

Just how good is IIS Crypto?

I've played around with IIS Crypto a fair bit. For those who don't know it, it's a freeware application that changes the registry to restrict the protocols and ciphers used by IIS, securing it against SSL vulnerabilities such as POODLE and DROWN. But I wondered: just how good an application is it?

Well, after some testing, it's pretty damn good.

I had some free credit at Azure, so I decided to spin up an IIS VM and run the Qualys SSL Labs test against it to see how it fares out of the box.

The test I ran was very simple: I spun up a Windows 2012 R2 box at Azure, installed IIS, and connected to it over its IP address to do basic validation. I then set up a DNS pointer to it and grabbed an SSL cert from StartCom, and once it was all configured, the default IIS page was available over SSL.

So, out of the box, with SSL enabled, how does IIS fare according to Qualys?

Actually, not too badly:

It rated a "C", with the server being vulnerable to POODLE.

Running the IIS Crypto tool and selecting "Best Practices" removed a whole list of ciphers and protocols. A reboot was required, which was slightly annoying, but as the tool changes the registry, it's understandable.
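Under the hood, the tool is setting Schannel registry values. As a flavour of the kind of change "Best Practices" makes, here's a minimal hand-rolled sketch that disables SSL 3.0 on the server side (the tool itself covers far more protocols and ciphers than this):

    # Disable SSL 3.0 for incoming (server-side) connections via Schannel.
    $key = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server'
    New-Item -Path $key -Force | Out-Null
    New-ItemProperty -Path $key -Name 'Enabled' -Value 0 -PropertyType DWord -Force | Out-Null
    New-ItemProperty -Path $key -Name 'DisabledByDefault' -Value 1 -PropertyType DWord -Force | Out-Null
    # A reboot is needed for Schannel to pick up the change.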

A quick test on Qualys again and we get a nice 'A' rating:

An 'A' is good, but it's not the 'A+' I'd have liked to see. Unfortunately, I didn't have time to do any further testing, but a quick Google turned up an article from Scott Helme about adding the Strict-Transport-Security header to IIS, which I'd like to have tried but wasn't able to. I suspect that Scott is spot on here and that this will get the coveted A+ from Qualys.
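For reference, the simplest version of what Scott describes looks something like this; a minimal sketch using the WebAdministration module against the default site (a proper setup should only send the header over HTTPS responses, which his article covers):

    # Add a Strict-Transport-Security response header to the default site.
    Import-Module WebAdministration
    Add-WebConfigurationProperty -PSPath 'IIS:\Sites\Default Web Site' `
        -Filter 'system.webServer/httpProtocol/customHeaders' -Name '.' `
        -Value @{ name = 'Strict-Transport-Security'; value = 'max-age=31536000' }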

This is all very straightforward for a single IIS site, but if you have multiple sites on the same server, then you're going to need to test each and every one, as hosting multiple sites is a key ingredient in making a server vulnerable to DROWN. Add in allowing SSL 2.0 or 3.0, RC4 as a cipher, and still using SHA-1 certs, and you get...

Avoid the 'F' grading! Look after your protocols no matter where the web server is hosted.

Thursday, July 09, 2015

David Cameron wants to Ban Encryption - 2

You can see my original blog article here.

In a follow-up speech, David Cameron reiterated his desire to break encryption on the UK's IT systems, and in a show of the "special relationship" between the UK and the US, America joined in, decrying encryption as the root cause of all terrorism.

On Wednesday FBI director James Comey briefed both the Senate Judiciary Committee and the Senate Intelligence Committee about the problems encryption is causing the FBI and others at stopping ISIL, drug dealers, pedophiles, and other unsavory types. He described the situation as communications "going dark" for law enforcement. 
Comey said that there were firms that provided encrypted end-to-end communications and yet still had the ability to read such messages as they travel through their servers, but declined repeatedly to say who these companies were or how their systems worked.
Source - The Register

At the risk of over-speculating on what the eventual plans will be, I really cannot see either country outright banning encryption or even suggesting weakening it. They know that encryption is what drives e-commerce, which is vital to both countries' financial standing, and it drives things like "Digital Britain" which, in turn, allows for more access to local councils and for more things to be done online, reducing the cost of providing those services in a traditional over-the-counter environment.

It seems highly likely that the UK and the US will push forward with a plan to have some sort of encryption master key, and they'll somehow require companies to register that key with them. In essence, they'll be building a snoopers' database, which will become one of the most important hacking targets ever created. Should the database be compromised, the Government will have just given the keys of the digital kingdom to every terrorist out there.

Imagine what IS or any other disgruntled group could do if they could intercept commercial traffic.
Imagine the chaos that could be caused if hackers got hold of it.
And it won't just be bedroom hackers or terrorists; it will be the government-trained hackers of foreign nations who want to get hold of commercial data. Companies would be ruined overnight if this goes ahead.

Even if banks are excluded from this master key database, it won't matter, because crackable encryption means that things like credit card data will still be readable by anyone with access to those master keys.

And that brings me to my final point: what about systems outside of the UK and the US? The cloud allows me to spin up servers in Asia and other places. If those places don't have similar laws to the UK and US, then people are free to set up strong encryption and not hand over the master keys, and businesses will almost certainly be invited to move to places like Singapore, Dubai and China with promises of being able to conduct business securely.

Ironically, the UK Government sponsors a site called "cyberstreetwise", which has a page at https://www.cyberstreetwise.com/cyberessentials/ offering a nice little badge should a company pass a questionnaire. One of the things in that questionnaire relates to using encryption to ensure secure digital communication.

All my questions to cyberstreetwise and to the Conservatives have gone unanswered, which is funny, as cyberstreetwise promotes itself with the line "please do ask us anything you need to know" and then ignores any difficult questions about the very Government that bankrolls it.

This proposed ban is going to be a very silly affair; many places have highlighted just how impractical it is. My own favourite can be read here.

Friday, July 03, 2015

How to do DNS correctly

Time and again, people seem to be configuring DNS outside of best-practice rules, so I thought it might be a good time to go through how DNS works, what DNS best practice is (with regard to a Windows environment) and why it's like that.

In a nutshell, the most common mistake I see with DNS configuration is this:

[Screenshot: a client NIC configured with an internal DNS server listed first and an external/public DNS server listed second]

This configuration is put in place for one of two reasons:

1. It is there to resolve external addresses should the internal server fail.
2. It is there to provide internet access should the internal server fail.

Point 1 is the most common reason I come across, and it's very wrong, because that is not how DNS works.
When a name query is run, the client will ask the first name server to handle it. If that name server replies and says "I don't know what that name is that you've sent me", that's it. The client will not ask the second DNS server, because it already has a valid reply. Yes, negative replies are valid replies; they are even cached locally for a period of time. All of this is covered in RFC 2308.
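You can watch this behaviour from any Windows client; a minimal sketch, where the host name and DNS server address are hypothetical:

    # Ask the internal DNS server for a name it doesn't know.
    Resolve-DnsName 'no-such-host.corp.example' -Server 10.0.0.10

    # The NXDOMAIN answer is now cached locally and shows up as
    # "Name does not exist" in the client resolver cache.
    ipconfig /displaydns | Select-String 'no-such-host' -Context 0,3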

Point 2 appears to make some sense: if the internal DNS server dies, then queries to it fail, but hey, at least people can still get on the internet - right?
Well, yes, but... every now and then that first DNS server is going to be too busy to reply, so the client will ask the second DNS server. If the query is for an internal resource, then the second DNS server won't know about it, and suddenly you've got this weird condition where a client appears to be refusing to ask the internal DNS server and nothing internal is being resolved; again, this is due to the cached negative responses covered in point 1.

Best practice is always to have your clients use internal DNS servers, and it's always best practice to have two of them.
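Setting this on a client is a one-liner; a minimal sketch with a hypothetical interface name and server addresses:

    # Point the client at two internal DNS servers - and nothing else.
    Set-DnsClientServerAddress -InterfaceAlias 'Ethernet' `
        -ServerAddresses ('10.0.0.10', '10.0.0.11')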

The second big configuration error I see is people using internal name servers as forwarders. This is utterly pointless, as the forwarder is there to handle queries that your internal DNS servers cannot. So, if you ping www.google.com internally, your internal DNS servers won't know what that is and will pass it on to the forwarder.
If your forwarders are just internal servers, then the query will either take a long time to complete (i.e. until it finally gets out of the network) or it'll just fail.
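On a Windows DNS server, the forwarders can be set and checked from PowerShell; a minimal sketch using Google's public resolvers as the upstream:

    # Forward anything the server isn't authoritative for to external resolvers.
    Set-DnsServerForwarder -IPAddress 8.8.8.8, 8.8.4.4
    Get-DnsServerForwarder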

In summary: internal DNS server IP addresses for clients, and forwarders on the DNS servers for everything else. Stick to that and DNS shouldn't ever be a problem.

Tuesday, March 31, 2015

My thoughts on handling a system outage

If it's going to happen, then it'll happen at the worst possible time. It'll happen that Friday evening just before going home/beer o'clock, it'll be a long weekend, and whatever system dies a death will be the very system that you have logged on to exactly once, several months ago, and that was in error.
In short, it's going to be the one system that you know nothing about and the one that normally just works.

And then the phones will ring with people demanding action, because it just so happens that the boss type wants something from that system before he leaves for the day, and no, it cannot wait, because it has to be right now.

So what do you do in these situations?

Believe it or not, the answer is "nothing" - at least at first.

No matter the issue, no matter how many people are telling you to get it fixed now, the very worst thing you can do is try things out "to see if it works".
You might get lucky, but you probably won't, and by trying things out at random you will turn what is probably a simple problem into an epic hunt to track down what it was you changed, just to get the system back to how it was before you "just tried something".

Any system that breaks needs to be treated like a crime scene: there is evidence there of what caused it to break, and that evidence needs to be collected. Something as simple as a reboot may well fix the problem, but it may not, and in rebooting you could lose that evidence, which may well be the very clue needed to stop the problem from happening again, possibly on multiple systems. So what to do?

At this point it's very easy to bow to pressure and try something, anything, to get it working and get out of there for the weekend, but this potentially puts you into the above category, where you'll be fighting just to get the system back to a known, broken state!

Firstly, preserve the evidence. If the box has blue-screened, take a screenshot via DRAC or iLO, or a photo on your phone, and only then reboot it.

Once it has rebooted, grab a copy of the dump file; there are some excellent online tools that will analyse dump files for you.

If the box hasn't blue-screened, then try and grab a copy of the state of the machine (a quick sketch of this follows below):
What services are running?
What applications are running?
What is its IP configuration?
How busy is it?
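A minimal sketch of that capture; the evidence folder path is a hypothetical example:

    # Snapshot the machine's state into a timestamped evidence folder.
    $dir = "C:\evidence\$(Get-Date -Format 'yyyyMMdd-HHmm')"
    New-Item -ItemType Directory -Path $dir -Force | Out-Null

    Get-Service | Sort-Object Status | Out-File "$dir\services.txt"
    Get-Process | Sort-Object CPU -Descending | Out-File "$dir\processes.txt"
    ipconfig /all | Out-File "$dir\ipconfig.txt"
    Get-Counter '\Processor(_Total)\% Processor Time' | Out-File "$dir\cpu.txt"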

Secondly, preserve the logs.
Take copies of the system and application event logs.
If the application has its own logs, then copy those too.
Ideally, all logs should already be sent to a syslog server. Of course, this is easy for Linux, but what about Windows? Again, there are agents for Windows that will perform this task admirably.
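Exporting the event logs into the same evidence folder only takes a couple of commands; the application log path here is a hypothetical example:

    # Export the Windows event logs in their native format.
    wevtutil epl System "$dir\system.evtx"
    wevtutil epl Application "$dir\application.evtx"

    # Copy any application-specific logs alongside them.
    Copy-Item 'D:\MyApp\logs' "$dir\app-logs" -Recurse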

So, now you've got some basic evidence, what next?
This all depends on the system, but start with how people access it. For example, is it OK from inside the company but broken from outside? If so, the server is fine, but you may have a connectivity, firewall or load balancer issue.
If it's broken from inside and out, then it's probably the server.
No matter the issue, basic connectivity tests are a good place to start (a sketch follows this list):
Can the server contact its default gateway?
Can the server contact a server in another VLAN?
Can the server contact the internet?
Google's DNS servers at 8.8.8.8 and 8.8.4.4 are wonderful for connectivity tests!
Can the server resolve names? Nslookup is the best tool here.
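A minimal sketch of those checks; the gateway and cross-VLAN addresses are hypothetical:

    # Basic connectivity, working outwards from the local network.
    Test-NetConnection 10.0.0.1      # default gateway
    Test-NetConnection 10.0.5.20     # a server in another VLAN
    Test-NetConnection 8.8.8.8       # the internet

    # Name resolution.
    nslookup www.google.com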

I'll expand more on this in a later article, but suffice to say, the seven-layer OSI model can be a handy reference for troubleshooting. Working "down" the model from application to physical is a good, methodical way to troubleshoot.

In summary, the logs and the state of the machine together represent the digital fingerprint of the issue, and it's important to preserve them. It shouldn't take more than a few minutes to gather the evidence; just make sure that you keep it all together.

One other important thing to note: once you've found the problem, if it is a bluescreen or crash due to a bug, driver, patch, etc., it's important to check other systems that could be vulnerable to the same issue, as this could save you or a colleague from another nightmare Friday-night troubleshooting scenario.

Wednesday, January 28, 2015

David Cameron wants to ban encrypted messages

A couple of weeks back, David Cameron gave the following quote:
"If I am prime minister I will make sure that it is a comprehensive piece of legislation that does not allow terrorists safe space to communicate with each other"
And on the surface of things, that seems fair. After all, right now, the Police and intelligence services can obtain a warrant and get access to postal mail and phone calls, and who wants terrorists to be able to plot in secret?

but......

Unfortunately, David Cameron's quote goes further and specifically targets internet encryption:
"In our country, do we want to allow a means of communication between people which even in extremis, with a signed warrant from the home secretary personally, that we cannot read? “Up until now, governments have said: ‘No, we must not'."
This is a very frightening statement because it shows a lack of understanding of how the internet works.

For example, right now, as I type this, I am on a secure connection, an HTTPS/SSL connection to Blogger. Everything I type here is being encrypted and sent over the wire to wherever the Blogger server is. This connection is encrypted thanks to mathematics. There is no trusted third party. There is no Royal Mail or British Telecom that holds a master key to access everything.

Now, let's say that David Cameron gets his way and, within the UK, there is a law that says all encrypted messages must be readable by the security services. Just how will this work when some services are outside of the UK? OK, America will play ball, but what about if I put a server in Asia? This sort of thing is very easy to do with the abundance of cloud computing resources.

Also, what actually constitutes a "message"? If I connect to a secure, encrypted VPN at my workplace and send out an email, is that a message? Would that break Cameron's snoopers' charter law?

What if my bank sends me a notification about a new product for my account when I log on to their secure site? Is that a message?

To provide the security services with a "skeleton key of encryption" would require putting back in the bottle a genie that was released when Phil Zimmermann published PGP. To undo that now is impossible and, in many ways, pointless.

Why pointless?

Well, if encrypted communications are banned or have a master key, why wouldn't terrorists send images and use steganography to hide messages inside them? Why wouldn't they spin up a cloud-based server and leave messages for each other directly on that server? No encryption needed, and if a pre-arranged code is used, it would look just like random text, much like the Bayesian-filter-dodging text you so often see at the bottom of spam email. Such a message could even be embedded IN a spam email.

And if there is a master key to all encryption, what would happen if it got out? And, of course, one day it would, even if no one deliberately released it. It would become the single biggest target for hackers, simply because of the bounty of information they'd get access to with ONE encryption key.
Suddenly, every single secure communication in the UK (except the Government's, probably) could be read by anyone with access to that key. All that data, just there for the taking.

This suggestion from David Cameron is beyond ludicrous; it shows a serious misunderstanding of how encryption works and misses the fact that the internet is a global phenomenon.

I suspect it's just a feel-good sound bite, but it does show that the Conservatives have no grasp of e-commerce, nor do they appear to have spoken to any experts beforehand.
This either makes them totally inept or very dangerous. Only time will tell.