Thursday, May 30, 2013

VMware vCenter 5.1 SSO - The worst installer out there?

I've spent the past few days attempting to install VMware's vCenter 5.1, but the installer has other ideas. I first tried the install from build 947939, and each time I hit the same error:

That was after trying the simple installer, which is supposed to do everything for you; instead it just fails with bits of the install done and a nice big error in the SSO install log:

    Caused by: java.sql.SQLException: Cannot open database "RSA" requested by the login. The login failed.
    Caused by: java.sql.SQLException: Login failed for user 'RSA_USER'.

Build 1123966, which was released just a few days ago, fixes a few things. Finally, the installer understands what a dynamic SQL port is, which is huge progress, but again Error 29114 rears its ugly head.

Further digging into the vminst log shows this:

    VMware Single Sign On-build-1123962: 05/30/13 00:44:45 ShellExecuteWait::done (wait) Res: 14
    VMware Single Sign On-build-1123962: 05/30/13 00:44:45 RunSSOCommand:: error code returned is 14 while launching C:\Program Files\VMware\Infrastructure\SSOServer\utils\rsautil.cmd
    VMware Single Sign On-build-1123962: 05/30/13 00:44:45 Posting error message 29114

According to VMware, the fix is listed in this article and is as follows:

This issue occurs when the SSO database RSA (default SSO DB name) does not meet the prerequisites for the SSO installation and the RSA DB table space RSA_INDEX has a filegroup type of PRIMARY instead of RSA_INDEX.

To resolve this issue, change the filegroup for the RSA_INDEX table space to RSA_INDEX using SQL Management Studio and then install vCenter SSO.

Contact your Database Administrator before making changes to the database.
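Before changing anything, it's worth confirming what the KB claims. As a sketch (the server name and authentication are placeholders, and the catalog query is my own, not from the KB), the filegroup behind each index in the RSA database can be listed with something like:

```shell
# Assemble the T-SQL that maps each index in the RSA database to its
# filegroup via the standard SQL Server catalog views sys.indexes and
# sys.filegroups. The script only prints the sqlcmd invocation; run it
# yourself, and clear it with your DBA first.
QUERY="SELECT i.name AS index_name, fg.name AS filegroup_name
FROM sys.indexes AS i
JOIN sys.filegroups AS fg ON i.data_space_id = fg.data_space_id;"
echo "sqlcmd -S <server> -d RSA -Q \"${QUERY}\""
```

Any index reporting PRIMARY where the KB expects RSA_INDEX would be a candidate for the fix above.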
Small problem though:

Yep, the filegroups on my SQL server for the RSA database are already set correctly.

There are a few suggestions out on the web that involve changing part of the SQL script to create the index in an MDF rather than an NDF, but this never worked for me.

At the time of writing I'm still bashing my head against this issue and getting more annoyed at the terrible installer, which is not only broken but compounds the breakage by putting information into several different log files.
VMware SSO has to be the most fiddly, broken installer for a required application that I've ever come across. It seems obvious that VMware understands that SSO has its share of issues, as 5.1U1a came hot on the heels of 5.1U1, but the installer still leaves a lot to be desired, still breaks in the majority of situations and is still a nightmare.

Wednesday, January 09, 2013

20 little rules for surviving in IT Projects

Below are just a few things I've come across in almost 19 years of working in IT. After a chat with a good friend who is considering breaking into IT, I thought it would be worth jotting them down.

1. Take time to document and keep a copy of the document. Make sure you document issues you had, that part will be invaluable in the future. When you document something make a note of software versions. You'll often be surprised at differences in 'point' versions. These differences can be the key between something working and something going majorly wrong.

2. Never be the person who says 'yeah, I remember seeing that error once'. Document it. You'll need it at some point. A well-structured wiki or notes system can cut future task time in half simply by recording known issues and fixes.

3. Never lie to clients, be they internal or external. You don't have to tell them everything, but a direct answer to a direct question, and admitting when you are wrong and/or don't know something, will win you points for honesty. I also call it being professional.

4. Going the extra mile is good; often that extra mile isn't staying until silly o'clock but providing some hand-holding during a major piece of work, or just some notes on why you are changing the config of something. Communication is vital.

5. Saying "it's urgent" is often code for "I didn't plan and am giving you this problem". If you already have documentation on how to fix something flagged as urgent and you have the time, then go for it; this can be worth major brownie points.

6. Sometimes something really is urgent. A major trick is knowing the difference between 'it's urgent' because it really is and 'it's urgent' because they didn't plan and need it now. This comes with experience and with listening to people.

7. Weekend work, whilst unpleasant, can be made easier with planning and pre-staging. All weekend work should have a back-out/rollback plan. Never be afraid to call for a rollback. You may not be popular with project managers, but that's preferable to digging yourself into a hole you can't get out of.

8. If you lack documentation for something (and at some point you will), then make your own. It will have gaps, but you'll have a starting point to build on. See point 1.

9. No matter who takes the minutes in a meeting, make your own. They don't have to be anything official, but a few notes go a long way when, six months down the line, someone asks about a point from that meeting or that client and you have the notes to hand.
Something as simple as a notebook that never leaves your side and has the name, date, attendees, client and actions, as well as notes, diagrams, etc., will be a major asset when you need it.

10. Prioritise. Most managers can't do it for you because they don't have the tech skills to know how something works. That said, if they demand something be done ASAP, they're the boss, so do as you're told.

11. Organise your email. For an IT person, email is often the key way people communicate and where all the alerts go. Organise it with rules and alerts to keep on top of what is important and what is noise. Don't be afraid to consign those management pep-talk emails to the noise pile; it's better to find a system problem than learn the latest internal buzz.

12. It's rare to find someone in IT who will share knowledge. If someone is happy to share, make the time to learn, as it's always good to be able to step back and see the big picture of something. Also, if you can talk to other people, not just IT people, in their own terms, not only will they be more willing to listen to you, they are more likely to help you. Be ready to share your own knowledge. IT people who hoard knowledge are often scared of being caught out. If you are caught out, you've just learnt a valuable lesson; accept it with good grace.

13. Listen. Be it to customers (internal and external), other people, meetings or, to a point, gossip. Gossip can and will tell you more about what is going on in a company than regular status updates, and from it you can infer the likely impact on systems under your control. Gossip is sometimes a better measure of where to put your energies than anything else!

14. When you do a project for a non-IT person, other company, users, etc they are trusting you with one of the most precious things they have - their data.
Don't let them down. Listen to them, understand what it is they do with their data and explain to them, in plain English - with diagrams if required - just what it is you will do with it and why it'll be better. Over time this will gain you both trust and a reputation for being professional.

15. Any backup is only as good as the restore procedure. Any restore procedure that has not been tested in the last 12 months should be considered just words in a document with no basis in reality. This goes double if software or hardware has been upgraded.
Even if it's up to date, validate it. It's best to verify and understand the gaps before getting to the point where you need the restore, only to find the gaps then or discover that a key step is missing.

16. Project plans are often just big task lists with no meaning to the dates. Understand the real deadlines in a project and the reasons for those deadlines. When project managers start pushing you'll know the real score - this is easier when you understand the customer and their job - see point 14.

17. At the end of a project, do your own lessons learnt. Too often, lessons are not learnt from lessons learnt. Your own version should highlight how to deal with certain blocks you encountered in the project, especially if they were blocks put up by colleagues; it'll help with future project work.

18. Avoid the following names: old, new, temp. They will bite you, and if a project is halted mid-work for some reason, when you go back to it which areas are valid? The old ones? The new ones? The temp ones? Meaningful names will save headaches in future.
If you are saddled with someone else's poor naming convention then document it - see point 2!

19. If possible, test changes to live data somewhere other than the live data before making the change. If that's not possible, make and test a backup. Saying "it'll be fine" is the IT version of "look at me!", often with the bone (data) shattering crunch at the end.

20. When you are asked to estimate time for a piece of work, don't estimate time for just that piece of work. Add in things like documentation, internal processes, liaison with others and interruptions, and then double it.
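As a toy illustration of that last rule (all the figures here are invented for the example):

```shell
# Toy estimate calculator for rule 20: start from the hands-on time,
# add overhead for documentation, internal process and liaison
# (assumed here at 50%), then double the lot for interruptions
# and the unknown.
base_hours=8                               # the "just that piece of work" figure
overhead=$(( base_hours / 2 ))             # docs, process, liaison
estimate=$(( (base_hours + overhead) * 2 ))
echo "Quote ${estimate} hours, not ${base_hours}."
```

The exact multipliers are a matter of taste; the point is that the raw hands-on time is never the whole story.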

Friday, January 04, 2013

Test labs and internet explorer patches

Test Labs

Test labs seem to be somewhat in focus at the moment, with my workplace deciding to dedicate a rack to a test environment that'll hold a NetApp cluster, a Cisco switch, a Brocade switch and a couple of servers running an ESX cluster.

With ESX added to the mix, this is sufficient to allow for rapid deployment of servers, which will allow testing of a huge variety of installations and, more importantly, allow for re-validation of existing solutions when those solutions have upgrades.

A classic example of an existing-solution problem is something I'll touch on later in the year but, in brief, it seems that the latest version (6.0.4) of NetApp's 'Single Mailbox Recovery' product doesn't work with certain versions of SnapManager for Exchange. A downgrade to 6.0.3 is required, and it is because of very silly incompatibilities and version issues like this that it's vital every now and then to revalidate existing solutions against the latest OS versions, patches, software and various combinations thereof!

Personally, I am a huge fan of the HP MicroServer, and with the £100 cashback offer it's a good choice for a simple personal test lab. My own test lab consists of a couple of MicroServers with some additional network cards running VMware's ESX 5.0, a FreeNAS box running FreeNAS 8.3-P1 with 5 x 2TB hard drives, and one running Windows 2008 with StarWind, as StarWind is free and provides a nice iSCSI target to play with.

In my opinion, having a test lab, whether at work or at home, is vital for procedural and system testing, and projects should always have time built in to allow for testing where software needs to be upgraded or where completely new solutions have been suggested, as it's always nice to know about and document the known issues before running into them in a production environment.

Personally, I like having my own test lab at home, as many times I've seen work test labs that are a total mess, with a mixture of production and test systems as well as stuff that no one knows about but everyone is too scared to delete - just in case. All too often I've found that a home test lab can turn something around faster than at work, which makes a work-from-home day quite a useful experience, and, of course, it can be used for upgrading skills or as a refresher prior to an interview.

Internet Explorer Patches

The first Patch Tuesday of 2013 will be here in a few days, and this one is going to be fun, with no fewer than 12 security holes being patched, though not the zero-day hole found on Christmas Day and currently being exploited. All versions of IE except version 9 are affected, so it seems the general consensus is that an upgrade to IE9 is the wiser move.

Of course, at this point a lot of people will make the same cry of 'Don't use Internet Explorer', which is fine until you consider that A. some software REQUIRES it and B. browsers like Firefox have add-ons that use Internet Explorer components to render pages that require Internet Explorer, thus making the computer vulnerable anyway.

Monday, December 03, 2012

The four constraints of a Project

Today I came across an interesting article on Wayne Hale's blog. It's about a comment made by Admiral Gehman (head of the CAIB) regarding Space Shuttle management at NASA. I've copied the key part below:

The program manager essentially has four areas to trade. The first one is money. Obviously, he can go get more money if he falls behind schedule. If he runs into technical difficulties or something goes wrong, he can go ask for more money. The second one is quantity.  The third one is performance margin. If you are in trouble with your program, and it isn’t working, you shave the performance. You shave the safety margin. You shave the margins.  The fourth one is time. If you are out of money, and you’re running into technical problems, or you need more time to solve a margin problem, you spread the program out, take more time. These are the four things that a program manager has. If you are a program manager for the shuttle, the option of quantity is eliminated. There are only four shuttles. You’re not going to buy any more. What you got is what you got. If money is being held constant, which it is—they’re on a fixed budget, and I’ll get into that later—then if you run into some kind of problem with your program, you can only trade time and margin. If somebody is making you stick to a rigid time schedule, then you’ve only got one thing left, and that’s margin. By margin, I mean either redundancy—making something 1.5 times stronger than it needs to be instead of 1.7 times stronger than it needs to be—or testing it twice instead of five times. That’s what I mean by margin.

What has this got to do with IT projects?

Well, a lot actually. In IT projects we are always up against the same sorts of constraints, and there are the same four areas that can be traded:

  • Time
    • The timeframe for the project, including testing and documentation.
  • Money
    • The budget for the project including overtime.
  • Quality
    • The actual 'fit for purpose' result of the project.
  • Scope
    • What needs to be achieved within the time frame and with the money provided.

All four areas are linked, and a change in one has an impact on the other three. For example, reducing the time while keeping the scope the same will impact the money and quality side of things, whereas increasing the quality either increases the time (due to the additional validation tests) or reduces the scope (to ensure that what is delivered is as accurate as possible).

Now, on most IT projects the two things that are set in stone are the budget and the time frame.
All too often the time frame is arbitrary, unrelated to any valid reason for having it. The delivery deadline is generally picked and handed down simply because a senior manager wants to see something by date X, and these dates are normally tied to budget cycles rather than to the complexity of the work.

On the flip side, the budget is often set at the start of the project, long before the software and hardware requirements have been thought through. This leads to cases where additional licences are required yet cannot be purchased because the budget has been set, so workarounds have to be developed, which affects the quality and time side of the project.

Many projects are presented with the time (deadline) and budget set.
In many, the scope has been pre-determined as well; even more often, the scope will grow during the project.
Managers and clients will want to shoehorn things into the project that were originally not in the scope.
This is normally seen by management as a way of saving money: they add more to the scope without increasing the time or budget allocated.
Even more often, during the project it is found that a system not in scope depends on, or is affected by, a system that is in scope, and so suddenly the not-in-scope system has to be added to the scope.

This means that the only thing that we, as project staff, IT admins, consultants, etc. have any actual control over is the quality, and the only control we have is to reduce the quality of the work in order to meet the demands of the time, budget and scope.

Even more often, the time factor becomes such a pressing issue that overtime is demanded to such an extent that quality is automatically sacrificed on the altar of 'getting it done', and of course, as I said above, you can't change one thing in isolation without a knock-on effect on the other three. The quality decreasing will have an impact on the time as things need to be fixed, on the budget as more staff or more overtime is needed, and on the scope as things are sometimes dropped in order to meet the deadline again.

Absolutely none of this is news, and I suspect everyone who reads this and works on some sort of IT project is nodding their head because they have been there, yet why do we, as IT professionals, keep getting caught in the same trap? It's not even as if anything I've written here is new: Tony Collins wrote a book in 1999 which goes into some detail about several major IT disasters. 1999 was almost 14 years ago, so why do we still make the same mistakes?

It is long past time that IT became a traditional engineering discipline and that engineers working on projects spoke up when a scope, schedule or budget was obviously written in fantasy land. Let's start doing projects once and doing them right.

Tuesday, September 18, 2012

Windows 8 Preview

I will have to admit that it was with some trepidation that I installed Windows 8 onto an old laptop that has been lurking around for a few years. This laptop is what I call my 'sacrificial laptop', in that it can be formatted and have an OS dumped on it with no loss of data. It's purely there for specific testing that needs a physical piece of hardware.

At first, I thought I would be clever and install it over the network via the Windows Deployment Server that has served me very well recently, but no joy with that, as the deployment server doesn't support WIM files from Windows 8 or Windows 2012. Nope, for that you'll need the newer Windows Deployment Toolkit, which was in beta when I first penned this and is probably part of Windows 2012 as a feature.

I have not had the opportunity to test Windows 2012 to see if its version of the Windows Deployment Server will be able to handle Windows 8, but it's certainly on the 'to test' list, as it will also give the first 2012 server in my environment a decent task. I didn't try any third-party tools, so cannot say whether they are suitable for deploying Windows 8.

So, after that false start I opted to go the old-fashioned route: burn a DVD and install from that, which worked perfectly. The install took around an hour, but this was from DVD to an old laptop which, whilst it has 4GB of RAM, has a rather old processor.

Once Windows 8 was installed it connected to the 2003 domain flawlessly, and before long I had a fully working Windows 8 box with the much discussed Metro/not-Metro front end.

There has been a lot of negative talk around Metro, but after using it both to see what it had to offer and 'in anger', I didn't find it too bad. Searching for apps is fairly intuitive. It would no doubt have been better still if my laptop were one of the newer touchscreen models, thanks to the oversized icons, but overall it was usable, and if you are used to keyboard shortcuts such as Ctrl-Esc then you'll have no issues with Windows 8.

A lot of the issues with Metro seem to centre on the absurd notion that it's only possible to run one application at a time. Any 'proper' Metro app does take the full screen, with no way of shrinking it, much like an iPad application; but, also like the iPad, several Metro apps can be loaded at once, and one of those apps is 'Desktop', which provides a Windows 7 style desktop complete with taskbar but no Start button. The normal Ctrl-Esc combination that would activate the Start button instead brings up the Metro screen, and typing in this screen does a basic search for an application based on whatever name its shortcut has.

During testing of Windows 8 most applications worked first time, so I was able to use it to run RDP sessions, PuTTY sessions and web browser sessions via Firefox, as the built-in, Metro-enabled Internet Explorer had major issues connecting to anything other than Bing, so web management of things like Avocent KVMs and NetApp filers was right out.

The big issue I had with the Metro front end was just how many applications required a SkyDrive account. For example, I clicked on Mail hoping for an Outlook Express style configuration wizard to pop up, but no joy; I just got this:

The pictures library was also a huge disappointment. I was hoping that I could point it to a URL or UNC path and let it read the images in there, but nope, it requires them to be in the local pictures library:

The weather application worked perfectly once I told it the location and I have to admit that I found it a very neat and powerful application, a real shame the others didn't work the same way:

The other thing that gave me a shock was the task manager - it's certainly changed!

Overall, I do like Windows 8. I do think it needs to be a bit more network aware, especially for things like the picture library, and it should also allow for things like the Mail icon on the Metro front end to be replaced with Outlook or something else; otherwise it's just wasted screen real estate.

Monday, July 02, 2012

Getting the most out of NetApp's Filer Simulator

As part of my home test lab I have an older PC running Linux that hosts several copies of NetApp's filer simulator v7.3.6. This is purely for functional and scenario-style testing, as the simulator is very slow and doesn't provide much disk space.

At this point, people familiar with the simulator will probably ask why I don't use the latest version, the VMware image of 8.x or 8.1, and there are a couple of reasons:

Firstly, I like having the simulator on a different physical piece of kit; it much more closely emulates a real environment, where the filer sits separately from the VMware environment it's serving.

Secondly, 8.1 does away with FilerView, so managing the filers means either the command line or firing up another VM with System Manager on it. Now, I do have a VM with System Manager, but yet again it's another application and another set of configuration that provides nothing useful except the latest ONTAP version, which is fine for when I need to test something against that version, but most of the time I just need basic NetApp storage to test out tool functionality.

Over the years of using it I have streamlined a process to give me as much disk space as possible and a fairly rapid deployment.

The process I use is as follows:

1. Copy the iso file to /tmp

2. Create a mount point with mkdir /mnt/iso

3. Mount it with mount -o loop /tmp/[iso name] /mnt/iso

4. cd /mnt/iso

5. Run the setup wizard
Install to /filer[number]
cd /filer[number]
vi ./

6. You can have a max of 56 disks per install.
A cluster counts as one install, so 28 disks each.
Change maxdisks from 28 to 56 if you only want a single filer (and you can have several single filers on the same hardware, allowing for SnapMirror/SnapVault tests with the extra disk space).

Change the default disk type from a (100MB) to f (1000MB)

Configuration parameters

7. Launch the filer via ./
Give it an IP address
Go to a web browser and enter http://[filerip]/api
Fill out the wizard, then go to http://[filerip]/na_admin

8. Logon and create a new aggregate.

9. Go to Secure Admin -> SSH -> generate keys -> enable SSH

10. Launch PuTTY and SSH to the filer
At the CLI:
vol create vol0_copy aggr1 5g
vol copy start vol0 vol0_copy
vol options vol0_copy root

There will be some warnings about mailboxes, ignore them.

Type halt at the CLI.

11. PuTTY to the Linux box
cd /filer[number]/,disks
Remove the two or three 100MB disks
rm ,reservations
cd /filer[number]

Allow setup to create as many disks as it needs to reach 56, but don't go over 56.

12. Launch the filer via ./
Add the new disks to the aggregate
Rename vol0_copy to vol0 and shrink it. 300MB should be more than enough

And that's how you get max disk space out of the ONTAP 7.3.x simulator. This is roughly what you'll have:

Resulting aggregate view
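The raw-capacity arithmetic behind the step 6 tweaks can be sanity-checked as follows (raw figures only; parity, the root volume and ONTAP's own overheads will eat into them, so treat the numbers as illustrative):

```shell
# Raw capacity of the simulator before and after the step 6 tweaks:
# 28 disks of type a (100MB each) by default, vs the maximum of
# 56 disks of type f (1000MB each).
default_mb=$(( 28 * 100 ))
tweaked_mb=$(( 56 * 1000 ))
echo "default: ${default_mb} MB raw"
echo "tweaked: ${tweaked_mb} MB raw"
```

Twenty times the raw space, which is why the extra fiddling with ,disks and ,reservations is worth the effort.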

Monday, February 27, 2012

Looking at home Storage Systems - FreeNAS

As I mentioned in a previous entry, I wasn't exactly impressed with Openfiler, and I have since found out that many of the Openfiler issues are down to a particularly nasty bug.

At the time of writing I have no idea if there is an official fix for the bug or if it is working 'as designed', and I don't much care, because FreeNAS has become a firm favourite of mine.

FreeNAS not only just works, but has obviously been designed with storage in mind: it works fine on small-scale systems and just as well on larger ones.

My own FreeNAS setup currently consists of an HP MicroServer and 4 x 2TB hard drives, with FreeNAS booting from a Verbatim 8GB USB stick.
The USB stick allows the full content of the hard drives to be available for storage, rather than having to install the OS onto part of a hard drive.
Boot times for FreeNAS 8 are less than a minute. With 4 x 2TB hard drives I'm seeing a little over 7TB in a RAID 5 array.
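For what it's worth, the rough RAID 5 arithmetic (one disk's worth of parity) looks like this; the exact figure reported will vary with decimal TB vs binary TiB and with how the array or pool accounts for parity:

```shell
# RAID 5 sacrifices roughly one disk to parity, so usable space on n
# equally sized disks is (n - 1) * disk size. With 4 x 2TB drives:
n=4
disk_tb=2
usable_tb=$(( (n - 1) * disk_tb ))
echo "~${usable_tb} TB usable from $(( n * disk_tb )) TB raw"
```

So a 4 x 2TB RAID 5 set gives roughly three disks' worth of usable space, with the reported size depending on how the system does its accounting.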

In terms of protocol support FreeNAS offers all the good stuff, such as CIFS, NFS and iSCSI, although I have yet to get iSCSI working for me. That probably has more to do with my being used to iSCSI in a NetApp environment, where tools like SnapDrive and web utilities like OnCommand hide a lot of the setup process.

FreeNAS protocol support is quite extensive

FreeNAS also allows for some older but still useful protocols like TFTP, and it allows CIFS shares to be added to NFS exports which, for admin reasons, can be very handy for those of us more used to Windows clients!

Because one of the options is to set up a ZFS partition, FreeNAS also allows for all the niceties that come with ZFS, such as snapshots.

These allow a snapshot of an area to be taken before a potentially dangerous operation is enacted on it.

One area that does let FreeNAS down, though, is a lack of media support. Whilst it's absolutely fine for streaming movies and music direct from the drive, it will not replace media servers like Plex, although there is nothing wrong with presenting Plex as the front end with FreeNAS as the back end, and the more I use FreeNAS the more I get the feeling that it really is an ideal back-end storage platform where more expensive storage doesn't fit or is simply too expensive to be considered.

For that reason it really is perfect for those of us with our own home networks who might need a bit more storage for scenario testing, photo manipulation, or NFS or iSCSI areas for other servers, but it will also work quite happily in a corporate environment.
Now, I am not suggesting it replace more high-end SAN or NAS solutions, but it still works well as the aforementioned dumping ground for other, and I hesitate to use the phrase 'less important', data; it'll certainly cope with things like Veeam backups and general second/third-tier storage. Personally, I will be sticking with FreeNAS as a home storage solution for the foreseeable future.