Wednesday, November 09, 2005


I'm going to post about something else in the next entry, so you don't think all I do is complain about stuff.

I've done some pretty neat things with Solaris Resource Manager, Extended Accounting, Perl, Perl DBI/DBD, Oracle and Crystal Reports that I'll tell you more about soon!

Thoughts regarding consolidation

One thing I've been giving a lot of thought to is consolidation of software.

The discussion comes from a lot of different perspectives: from managers regarding cost, from architects regarding making the landscape easier to understand (fewer servers means fewer dependencies, right?), and probably a few others.

We have tried this consolidation thing, and it works great for some applications/services. Two things we have been highly successful with are WebSphere and Oracle. We have machines that run 30-40 separate installations of WebSphere in the same OS, not using zones or any other mechanism, just common sense about how ports are distributed between the appservers. Oracle is the same story: up to 10 different Oracle installations at different patchlevels, serving the DB needs of MANY different applications, and it works GREAT! Our problem is the OTHER applications:

  • Applications with weird licensing models that make the price skyrocket just because we move them from a 4-way box to a 16-way 6900, with NO extra service added.
  • Applications that use a shitload of ports, with undocumented and unknown services that are most likely not configurable.
  • Internal system owners who want their OWN box just for the heck of it.
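The port-distribution convention mentioned above, for stacking dozens of appserver instances in one OS, can be sketched roughly like this. The base port and block size here are made-up illustrative numbers, not our real scheme:

```python
# Sketch: deterministic port allocation for many appserver instances on one host.
# Assumption: each instance needs a small contiguous block of ports (HTTP, admin,
# SOAP connector, etc.). BASE_PORT and PORTS_PER_INSTANCE are hypothetical values.

BASE_PORT = 9000          # first port of the first instance (illustrative)
PORTS_PER_INSTANCE = 20   # reserve a block per instance so services never collide

def instance_ports(instance_no):
    """Return the port block for appserver instance N (0-based)."""
    base = BASE_PORT + instance_no * PORTS_PER_INSTANCE
    return {
        "http":  base,      # application traffic
        "admin": base + 1,  # admin console
        "soap":  base + 2,  # SOAP connector
        # base+3 .. base+19 stay free for future services of this instance
    }

def check_no_collisions(n_instances):
    """Sanity check: no port is handed out twice across all instances."""
    seen = set()
    for i in range(n_instances):
        for port in instance_ports(i).values():
            if port in seen:
                raise ValueError("port collision at %d" % port)
            seen.add(port)
    return True
```

The point of a fixed convention like this is that 40 instances can coexist without anyone ever having to hunt for a free port by trial and error.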
First up: Outdated Licensing Models

We run one Business Intelligence application that is quite famous throughout the world. It's divided into two parts: one part does all the smart stuff, connecting to different datasources and so on, and the other is a frontend where people can read the generated reports. This application's licence is bound to the number of cores available AS WELL as how many MHz they run at. I find this incredibly annoying.

Why should the software vendor care at all about how much hardware we throw at a problem? If we are only going to generate one report that one person will read, but we want that report generated lightning fast, why shouldn't we be able to throw a 25K at the problem without running into licensing issues? Should the vendor really care if we are foolish enough to spend a gazillion dollars making their software run faster? Do the vendor's costs increase if we run the application on a 25K instead of a V240? No, not at all.

Many people have blogged about this issue, but I still don't see much change at many SW vendors. Please do SOMETHING! Base the licensing on something else: the number of reports generated, the number of user accounts in the application, and so on. Look at JES or utility computing or any other model. I'm sure many organizations would welcome such a move; it would make managers happier with the application and get them to embrace it further, rather than keeping it at arm's length just because of bad licensing.
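The core of the complaint is simple arithmetic. Here is an illustrative comparison of a hardware-bound licence versus a usage-bound one when the same workload moves to a bigger box; every price and machine spec below is a made-up number, not a real vendor's list price:

```python
# Illustrative only: compare a per-core/per-MHz licence with a per-user licence
# when an identical workload moves from a small box to a big one. All rates and
# specs are invented for the example.

def per_core_mhz_cost(cores, mhz, dollars_per_core_ghz=10_000):
    """Licence cost tied to hardware: cores * GHz * rate."""
    return cores * (mhz / 1000.0) * dollars_per_core_ghz

def per_user_cost(users, dollars_per_user=500):
    """Licence cost tied to actual usage: number of user accounts."""
    return users * dollars_per_user

# Same 50 users, same reports -- only the hardware changes:
small_box = per_core_mhz_cost(cores=4, mhz=1200)   # a 4-way box
big_box = per_core_mhz_cost(cores=24, mhz=1200)    # a 6900-class box
usage_based = per_user_cost(users=50)              # identical on both boxes
```

Under the hardware-bound model the licence cost multiplies with core count even though not a single extra report is produced; under the per-user model it doesn't move at all.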

I'll write more about the other two annoying parts in later posts.

Take care ppl!

Monday, May 02, 2005


I just came back from two weeks in the US: first a couple of days in Washington DC, then off to California and Nevada.

SuperG was good, not great, but good! The best sessions were McDougall talking about iSCSI vs NFS, Glenn Fawcett talking about scaling Oracle on high-end boxes, and Steven Johnsson talking about storage subsystems and their performance. The BoF sessions on Zones and SRM were interesting as well, but the presentation about Zones was so boring; everyone kept explaining what a zone is. Doesn't everyone know what a zone is by now? Sun has been preaching about them for two years.

I'll write some more thoughts later on, after a dentist appointment.

Thursday, March 03, 2005

Cutting Costs by "Semi-Outsourcing"

The place I'm working at is into some serious cost-cutting (what business isn't?). Recently the people way above my head had a meeting with some account managers from Sun. I can imagine the meeting going something like this:

Sales rep: We can help you achieve better ROI on the $$$$'s you spend by letting us do more for you, while you still pay the same amount for your service. Let us help you keep your Solaris more available and more secure by handling all your OS updates.

Manager: That sounds like a good idea. I'd like more $$$$'s to spend on our yearly golf tournament.

Sales rep: That's what we thought. Do you want some complimentary golf balls?

So the meeting trickles down to us, the senior admins, and we start to groan. My opinion is this:

Keeping 500 Unix boxes up to date with the latest patches is NO problem whatsoever, as long as you don't have to think about applications. I've seen several patching tools that log into machines, check all patches, install newer ones if needed, reboot, and have the machine back online in no time (JASS, N1 *cough*, flars, etc.). Sounds great on paper, and when sales reps talk to management. But in the end it comes down to the following two problems:

Applications and service windows.

Of course, all you other people out there work for THE company that has full control over its SLAs, has its little service window every month when it's OK to take down the servers and apply OS and security patches, and whose system owners scream with joy when you say you want to apply your quarterly OS patches.

Why can't management understand that Solaris is easy and applications are hard? You can't apply patches, go home, and expect all applications to work as intended when you come back in the morning. After applying patches you need to make sure the applications still work. What takes time with OS/security patches is getting access to the system, getting some allowed downtime, and then verifying that everything works as intended afterwards. GETTING THE PATCHES ON THE SYSTEM IS EASY.

What makes management think that Sun's technicians would do a cheaper/faster/better job at patching our systems, when the problem lies within our own organisation? Hire us some secretaries to keep the paperwork away from us and handle the booking of downtime and test personnel (or help with some automated testing tools), and we will make sure that patching goes smoothly from here on and ever after!
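The expensive part, verifying the applications after a patch run, is essentially a checklist of smoke tests per box. A minimal sketch of such a runner, with stubbed checks (the check names and lambdas below are placeholders, not real probes):

```python
# Sketch of a post-patch verification runner. The patching itself is the easy,
# automatable part; this is the bit that actually eats the time. The checks
# below are stand-ins -- real ones would probe ports, fetch health pages, or
# look for the output of a nightly batch job.

def run_smoke_tests(checks):
    """Run every (name, check_fn) pair; return lists of passed/failed names."""
    passed, failed = [], []
    for name, check in checks:
        try:
            ok = check()
        except Exception:
            ok = False          # a crashing check counts as a failure
        (passed if ok else failed).append(name)
    return passed, failed

# Hypothetical post-reboot checklist for one box:
checks = [
    ("oracle_listener_up", lambda: True),   # e.g. try connecting to port 1521
    ("websphere_http_up",  lambda: True),   # e.g. fetch an app health page
    ("nightly_batch_ran",  lambda: False),  # e.g. look for today's report file
]
passed, failed = run_smoke_tests(checks)
```

With something like this per application, "did everything survive the patch?" becomes a report instead of a morning of manual poking.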

Wednesday, March 02, 2005

Solaris Zones, just how separated are they anyway?

So I'm looking into BSM auditing on Solaris 9 and 10. Has anyone else noticed that something is missing from Sun's great offering? Yes: a nice way of collecting the audit logs would have been nice.

Some might say: Solaris 10 supports syslog! Which is a great improvement, until you realise that syslog truncates each entry at 1024 characters (which isn't THAT long if you have looked at the audit logs).
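One crude workaround for the 1024-character cap would be to split long records into numbered fragments and reassemble them at the collector. This is just a sketch of that idea, not how Solaris' audit-to-syslog plumbing actually behaves:

```python
# Sketch: split a long audit record into numbered, syslog-sized fragments so a
# collector can reassemble it. The fragment format "<id>:<seq>/<total>:<chunk>"
# is invented for this example; 1024 is syslog's per-entry cap mentioned above.

MAX_ENTRY = 1024

def split_record(record_id, record, limit=MAX_ENTRY):
    """Yield '<id>:<seq>/<total>:<chunk>' fragments that each fit in one entry."""
    header_overhead = len("%s:999/999:" % record_id)   # worst-case header size
    chunk_size = limit - header_overhead
    chunks = [record[i:i + chunk_size] for i in range(0, len(record), chunk_size)]
    total = len(chunks)
    for seq, chunk in enumerate(chunks, start=1):
        yield "%s:%d/%d:%s" % (record_id, seq, total, chunk)
```

The collector side just sorts fragments by sequence number per record id and concatenates the chunks back together.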

So what do you do if you want to collect logs from several hundred servers in various security zones? Sun suggested that with Solaris 10 you can run your services in a zone; the auditing then takes place in the global zone, and a few scripts can send the logs to a logserver using scp. That's a pretty good approach, but at the place where I work we don't really like having to open up SSH to a logserver. What other ideas can we come up with?

Here is one: Solaris 10 on a host with a few network interfaces and a zone per interface, each connected to its own DMZ and running an NFS server. Each server logs to the NFS share and rotates the log every 5 minutes. In the global zone, a cron script uses fuser to check whether each file is still held open by auditd and, if not, moves it off the NFS share into the global zone. The files cannot be deleted without breaking into the global zone of the logserver, which shouldn't be accessible from the different DMZs. Sounds pretty neat to my ears, and quite cost-effective compared to buying a StorEdge 5310 with Compliance Archiving software for quite a few thousand dollars.
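The global-zone cron job's logic is small enough to sketch. Here the `is_open()` predicate stands in for running `fuser` against the file, so the move-if-not-held logic can be shown without a live auditd:

```python
# Sketch of the global-zone collection step: move rotated audit logs off the
# NFS share once auditd no longer holds them open. is_open() is a stand-in for
# an external `fuser <file>` check; everything else is plain file shuffling.

import os
import shutil

def collect_rotated_logs(share_dir, archive_dir, is_open):
    """Move every regular file in share_dir that is not held open to archive_dir."""
    moved = []
    for name in sorted(os.listdir(share_dir)):
        path = os.path.join(share_dir, name)
        if not os.path.isfile(path):
            continue
        if is_open(path):       # still being written -- leave it for next run
            continue
        shutil.move(path, os.path.join(archive_dir, name))
        moved.append(name)
    return moved
```

Run from cron every few minutes, this drains the share into the global zone, where the DMZ side can no longer touch the files.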

So I'm quite happy: wow, what a nice idea! I make a quick phone call to my Sun contact and explain what I'm thinking, and he responds: nice idea... on paper... The only problem is that you can't have separate NFS servers running in different zones. My mood just went south from there, which brings me back to my headline.

How separate are zones anyway? How dependent are they upon each other? They share a kernel; for some reason they aren't separate enough to run NFS servers in them; you can't have different system clocks in them; they share shared memory. How much can we trust zones? If Sun themselves can't make their own protocols and services run within a zone (and they released the NFS specs to the open-source world back in '84, apparently), how likely is it that all third-party vendors will succeed in writing zone-compliant software?

Don't get me wrong, I think Zones are a great leap forward, but are they "separate" enough? I don't have enough competence to judge, but from my point of view, perhaps they aren't in their current form. Maybe they will be in Solaris 11?