Saturday, June 18, 2011
Illumos Panel Session - Videos
A little too much unnecessary bitterness towards Solaris 11, but other than that it is worth watching.
http://www.youtube.com/watch?v=NArlja-RXVA
Tuesday, June 07, 2011
Oracle Corporation on behalf of Dell, Inc
Out of curiosity I checked the Solaris 11 HCL list. One of the recently added entries is for a Dell PE C1100 server, and what caught my eye was:
Submitter Company: Oracle Corporation on behalf of Dell, Inc.

There are similar entries for other Dell servers and also for IBM servers (for example: HS22, HX5).
Then there are servers submitted directly by HP (for example: DL585 G7, BL680c G7).
This is a good signal that Oracle is standing behind Solaris on 3rd-party x86 servers.
Tuesday, May 24, 2011
Stopping a thread
Solaris 11 Express allows you to stop/resume a single thread - really cool!
See the man pages for pstop and prun.
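From what I can tell from the man pages, both tools now take a pid/lwpid argument for this. Under the hood it appears to go through the per-LWP control files in /proc; below is a rough sketch of my own (based on proc(4), error handling omitted - treat it as an illustration, not a reference), showing how a single LWP could be stopped and resumed directly:

/*
 * Illustration only (not from the man pages): stop and resume a single
 * LWP via the /proc control interface described in proc(4), which is
 * roughly what pstop/prun do for a pid/lwpid argument.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <procfs.h>

int main(int argc, char **argv)
{
    char path[64];
    long ctl[2];
    int fd;

    if (argc != 3)          /* argv[1] = pid, argv[2] = lwpid */
        return (1);

    (void) snprintf(path, sizeof (path), "/proc/%s/lwp/%s/lwpctl",
        argv[1], argv[2]);
    fd = open(path, O_WRONLY);

    ctl[0] = PCSTOP;                        /* stop just this LWP and wait */
    (void) write(fd, ctl, sizeof (long));

    (void) sleep(5);                        /* the LWP stays stopped here */

    ctl[0] = PCRUN;                         /* make it runnable again */
    ctl[1] = 0;                             /* no PRSTEP/PRCSIG/... flags */
    (void) write(fd, ctl, 2 * sizeof (long));

    (void) close(fd);
    return (0);
}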
Wednesday, March 16, 2011
ZFS returns to MacOS?
http://info.tenscomplement.com
http://z410.tenscomplement.com

"We're not quite there yet — but we have some exciting products in development. Our version of ZFS for Mac OS X, Z-410 Storage, started external beta testing last week. Expect more announcements soon."

"Our foundational release of ZFS for Mac OS X is targeted at early adopters and those who can't wait to combine the world's most innovative operating system with the world's most advanced file system. The initial beta evaluation program is in progress (thank you to those participating). We hope to have more product details soon. Those interested in participating in our future test programs can leave their email below."

Read about it also on ZDNet.
Saturday, March 05, 2011
SCSI UNMAP in iZFS
Solaris ZFS got encryption recently and now Illumos ZFS gets SCSI UNMAP support. Are we going to end up with two diverging implementations of ZFS (ZFS proper and iZFS?) in the long term? I guess it is mostly up to Oracle now - it always has been.
Tuesday, January 18, 2011
Forced Privileges
Solaris 11 has a new feature called Forced Privileges which allows a specific set of privileges to be assigned when a given program is run. This makes some setuid binaries safer - they are no longer really setuid in the traditional sense. There is an excellent blog entry by Darren, who explains it in a bit more detail.
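As a side note - this is just my own illustration, not something from Darren's post - you can check what privileges a process actually ends up with by dumping its privilege sets via getppriv(2) and priv_set_to_str(3C); run from a plain shell it shows the basic set, while a binary granted forced privileges would show the extras it received:

/* Illustration only: print the privilege sets of the current process. */
#include <priv.h>
#include <stdio.h>
#include <stdlib.h>

static void print_set(priv_ptype_t which)
{
    priv_set_t *set = priv_allocset();
    char *str;

    if (set == NULL || getppriv(which, set) != 0) {
        perror("getppriv");
        exit(1);
    }
    str = priv_set_to_str(set, ',', PRIV_STR_PORT);
    (void) printf("%-12s %s\n", which, str);
    free(str);
    priv_freeset(set);
}

int main(void)
{
    print_set(PRIV_EFFECTIVE);
    print_set(PRIV_PERMITTED);
    print_set(PRIV_INHERITABLE);
    print_set(PRIV_LIMIT);
    return (0);
}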
Wednesday, January 12, 2011
ZFS on Linux - GA Release
The announcement below was posted earlier today on the OpenSolaris mailing list:
"
Hi All,
Happy New Year !
First of all, a big thanks to you all for the tremendous response to the beta of ZFS port on Linux. It has been a continuous and arduous effort, not only from our team, but also from you all to rigorously test the beta and make the ZFS port on Linux code fit for use on production servers. Finally, after a three month effort on the testing, making changes and fixing bugs, we now come closer to the GA release.
I am glad to announce that KQ is releasing the GA code for ZFS port on Linux on January 14th. It is the confirmed date and everyone here would be glad to know that majority of the bugs reported have been fixed. The code base is on zpool 28, which means, you have access to dedup feature which wasn't available in the beta earlier !
Also, we are hosting a webinar on January 14th with an interactive conversation with our CEO answering your questions. You can find the details of the webinar on www.kqstor.com
Once again, thank you all very much for the support and patience. We have finally made it!
For any questions, feel free to contact me at darshin@kqinfotech.com
Cheers,
Darshin
"
Friday, December 03, 2010
Religion in IT
Joerg posted:
Interesting statement in a searchdatacenter article about the IDC numbers:

"When you sell against Dell, you sell against price. When you sell against HP, you sell against technical stuff -- the feeds and speeds. When you're up against IBM, you're not selling against boxes but against solutions or business outcomes that happen to include hardware. But, when you get to the Sun guys, it's about religion. You can't get to those guys. One guy told me last year that he would get off his Sun box when he dies."
Thursday, December 02, 2010
Linux, O_SYNC and Write Barriers
We all love Linux... though sometimes it is better not to look under its hood, as you never know what you might find.
I stumbled across a very interesting discussion on a Linux kernel mailing list. It is dated August 2009 so you may have already read it.
There is a related RH bug.
I'm a little surprised by RH's attitude in this ticket. IMHO they should have fixed it, and maybe provided a tunable to enable/disable the new behavior, instead of keeping the broken implementation. But at least in recent man pages they have clarified it in the Notes section of open(2):
"POSIX provides for three different variants of synchronized I/O, corresponding to the flags O_SYNC, O_DSYNC, and O_RSYNC. Currently (2.6.31), Linux only implements O_SYNC, but glibc maps O_DSYNC and O_RSYNC to the same numerical value as O_SYNC. Most Linux file systems don't actually implement the POSIX O_SYNC semantics, which require all metadata updates of a write to be on disk on returning to userspace, but only the O_DSYNC semantics, which require only actual file data and metadata necessary to retrieve it to be on disk by the time the system call returns."Then there is another even more interesting discussion about write barriers:
"All of them fail to commit drive caches under some circumstances;and also this one:
even fsync on ext3 with barriers enabled (because it doesn't
commit a journal record if there were writes but no inode change
with data=ordered)."
"No, fsync() doesn't always flush the drive's write cache. It oftenThis is really scary. I wonder how many developers knew about it especially when coding for Linux when data safety was paramount. Sometimes it feels that some Linux developers are coding to win benchmarks and do not necessarily care about data safety, correctness and standards like POSIX. What is even worse is that some of them don't even bother to tell you about it in official documentation (at least the O_SYNC/O_DSYNC issue is documented in the man page now).
does, any I think many people are under the impression it always does, but it doesn't.
Try this code on ext3:
fd = open ("test_file", O_RDWR | O_CREAT | O_TRUNC, 0666);
while (1) {
    char byte;
    usleep (100000);
    pwrite (fd, &byte, 1, 0);
    fsync (fd);
}
It will do just over 10 write ops per second on an idle system (13 on mine), and 1 flush op per second.
That's because ext3 fsync() only does a journal commit when the inode has changed. The inode mtime is changed by write only with 1 second granularity. Without a journal commit, there's no barrier, which translates to not flushing disk write cache.
If you add "fchmod (fd, 0644); fchmod (fd, 0664);" between the write and fsync, you'll see at least 20 write ops and 20 flush ops per second, and you'll hear the disk seeking more. That's because the fchmod dirties the inode, so fsync() writes the inode with a journal commit.
It turns out even _that_ is not sufficient according to the kernel internals. A journal commit uses an ordered request, which isn't the same as a flush potentially, it just happens to use flush in this instance. I'm not sure if ordered requests are actually implemented by any drivers at the moment. If not now, they will be one day.
We could change ext3 fsync() to always do a journal commit, and depend on the non-existence of block drivers which do ordered (not flush) barrier requests. But there's lots of things wrong with that. Not least, it sucks performance for database-like applications and virtual machines, a lot due to unnecessary seeks. That way lies wrongness.
Rightness is to make fdatasync() work well, with a genuine flush (or equivalent (see FUA), only when required, and not a mere ordered barrier), no inode write, and to make sync_file_range()[*] offer the fancier applications finer controls which reflect what they actually need."

This is really scary. I wonder how many developers knew about this, especially when coding for Linux where data safety was paramount. Sometimes it feels that some Linux developers are coding to win benchmarks and do not necessarily care about data safety, correctness or standards like POSIX. What is even worse is that some of them don't even bother to tell you about it in the official documentation (at least the O_SYNC/O_DSYNC issue is documented in the open(2) man page now).
Monday, November 15, 2010
Solaris 11 Express
It is based on build 151. The interesting thing is that you can buy standard support for it, and it doesn't look like Oracle is treating it as a beta - at least not officially.

But then why not call it Solaris 11? I guess it is partly due to marketing reasons and partly because it is not entirely ready yet - there are some components which require further work.
Monday, October 04, 2010
ZFS Encryption
It looks like the ZFS crypto project has finally been integrated. It has been reported by another blogger here, and bug id 4854202 (ZFS data set encryption) has been updated to reflect that it is in snv_149. Congratulations to Darren Moffat, the guy behind the project!
It took much longer than expected. If I were to speculate, I would say the integration was delayed on purpose so that it landed only after public access to the onnv gate was closed, so that 3rd parties like Nexenta could not take advantage of it before Oracle does. Hopefully once Solaris 11 is out we will see the source code as well. It also probably means that it will be in Solaris 11 Express.