Pro OpenSolaris: A personal favorite I recommend to all

Pro OpenSolaris, by Harry Foxwell and Christine Tran (Apress, ISBN 1430218916)

For a combination of personal and professional reasons, Pro OpenSolaris is the perfect book at the perfect time for me.

As a CTO with enterprise experience I know open source software and its many benefits. As a security professional I have long known of the powerful security features of open source in general and Solaris specifically. As a computer scientist I have also been a long-time personal user of Solaris (as well as Linux, Mac OS X, XP, and Vista). But things have been moving fast in the open source community, and some of the most dramatic changes have been in OpenSolaris, so it has been hard for me to keep up. This book provides a great update on those changes and puts them in the context I needed for continued learning.

But let me tell you why I really liked this book. It presents information that I believe all software developers, programmers, project managers, and CTOs really need to know, and it presents it in a way that is fast, fun reading. Harry Foxwell and Christine Tran have mastered the art of clear expression, a rare gift among technical people.

But here is why you really need to read this book: although you can find loads of information on the Internet covering the technical details of open source software and especially Solaris, it can be very hard to find a comprehensive update on the new innovations in OpenSolaris. These include ZFS, a massively scalable new approach to data storage, and security significantly enhanced over the already very secure Solaris. Virtualization is also a key topic, as is OpenSolaris's monitoring and observability facility, DTrace. And, of importance to Linux and Solaris developers alike, the book provides a great overview of, and context for, the open-source-based OpenSolaris development environment.
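To give a flavor of the DTrace side of that, here is the classic one-liner for counting system calls by process name; this is standard DTrace usage and runs as root on any OpenSolaris system:

```
# Aggregate system calls by process name; Ctrl-C prints the tally.
dtrace -n 'syscall:::entry { @[execname] = count(); }'
```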

This book gives you everything you need to take a computer from its current state to one that is running OpenSolaris, either alone or as part of a virtualized system. It then provides great context and suggestions for tailoring the environment to be just the way you want it to be.

On a personal note: I've known Harry for about five years. I first met him when I was CTO at DIA. I found him to be one of the most pleasant, easiest-to-work-with professionals in the business. He also has the gift of being able to explain and teach, which is something I have always appreciated. Those gifts come through in this book.

Let me close with another great reason to buy the book: it will give you a great, no-nonsense understanding of what is really coming out of the open source software community. All technology professionals need a better understanding of that. Please order your copy of Pro OpenSolaris.

About Bob Gourley

Bob Gourley is a Co-founder and Partner at Cognitio, the founder and CTO of Crucial Point LLC, and the publisher of CTOvision.com and ThreatBrief.com. His background is as an all-source intelligence analyst and an enterprise CTO. Find him on Twitter at @BobGourley.

Comments

  1. DayTrader says:

    I can't say what, but I use OpenSolaris for a lot of my projects here, and the main reason is the ZFS file system.

    When the data running in my server farm hits thousands of TB and is getting ready to push into the PB range, the flexibility of ZFS and its disk pools, without having to resort to all sorts of volume managers and such, is awesome.
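    For anyone who hasn't tried it, a minimal sketch of what that looks like (the pool name and device names are placeholders):

    ```
    # Build a mirrored pool directly from raw disks; no volume manager layer.
    zpool create tank mirror c0t0d0 c0t1d0
    # File systems are then carved out of the pool on demand.
    zfs create tank/data
    zpool list tank
    ```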

    I also really like the fact that ZFS closes a lot of the holes in hardware RAID controllers and catches silent disk errors.
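    The mechanism behind that is end-to-end checksumming: a scrub walks every block, and redundant configurations self-heal from a good copy. A sketch, again with a hypothetical pool name:

    ```
    # Verify the checksum of every block in the pool.
    zpool scrub tank
    # Report scrub progress and any checksum errors found.
    zpool status -v tank
    ```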

    Since it uses copy-on-write to deal with transactions, and the pool size can be as big as you want, you end up with faster access as the number of drives grows, since most of your 'working set' of data is already under the heads and doesn't require seeks.

    With good SSD caches or a front-end RAM-based cache system you can literally end up with only a few percent of your queries ever going out to the disk farm.
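    Those SSD caches plug straight into a pool as a second-level read cache (L2ARC) plus a separate intent log for synchronous writes; the device names below are placeholders:

    ```
    # Add an SSD as a second-level read cache (L2ARC).
    zpool add tank cache c2t0d0
    # Add another as a separate ZFS intent log for synchronous writes.
    zpool add tank log c2t1d0
    ```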

  2. DayTrader says:

    The way my system is set up here is always to have large caches, to minimize disk access at all levels of the setup.

    I have multiple front-end workstation-class computers to access the data I need to work with, and a large video wall to display it all.

    In general there are hundreds of servers running OpenSolaris, set up as a four-tier data staging system. Each tier has maximum-size caches to minimize its own disk drive access.

    Utilizing the snapshot capability in ZFS, it is very easy to migrate data on an LRU basis from one tier to the next (see the sketch below).

    The most current data is in the front-end tier. When the drives in a pool become 90% full, data is moved up to the next tier, verified as transferred, and then deleted until the pool is 70% or less full.
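    A sketch of one migration step under those assumptions (the pool, dataset, snapshot, and host names are hypothetical):

    ```
    # Check how full the front-tier pool is (prints e.g. "90%").
    zpool list -H -o capacity tank
    # Snapshot the dataset and replicate it to the next tier.
    zfs snapshot tank/feed@2009-05-28
    zfs send tank/feed@2009-05-28 | ssh tier2 zfs receive archive/feed
    # After verifying the copy on tier2, reclaim the space locally.
    zfs destroy tank/feed@2009-05-28
    ```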

    Each tier is larger than the one under it. Therefore, by the theory of working sets, the chance of a hit grows exponentially as you move from one tier to the next.

    Two operations go on here in parallel. The active-usage servers take in the feeds and run a massive set of distributed models to look at the data and present what I want for my purposes.

    Then the last tier of data is used with heavy-duty mainframes (fully loaded IBM System z9 computers) to back-test the models and recommend feedback for adjusting the tuning and weighting parameters, heuristically homing in on the optimal solution for the models I run.
