Day 2 - early thoughts:Brrrr. It is colder this morning and the wind is biting as it whips between the city buildings. Attendance got off to a slower start this morning - very typical for a technical conference. The venue again has a nice spread for morning snacks and I am fueled up on fresh fruit. Time to learn more about file systems and storage.
Morning breakout sessions.The first session was a tough choice. There was not a standout that drew me in - most sounded interesting but not immediately relevant to what I work with at the moment. The 100% FOSS Storage Array would probably be closest, but it may or may not have that much new information for me, and it has a Red Hat presenter, which means I have other avenues to obtain the information. So I opted to attend the talk on optimizing FUSE for Cloud Storage. Diversify my experience. It was given by a representative from Parallels, where they are using FUSE to interact with stored images. While mostly over my head with API information, it was still interesting to see another use of FUSE from a user and contributor.
Next up was a pair of talks on GlusterFS: Overview and Future Direction followed by Data Compliance Infrastructure.
The overview was a useful list of features recently released, coming in the next release, and planned for future releases. Seeing that SSL connections and encryption at rest are in the current release just means I have some work to do when I get home. I saw the options the last time I was working with RHS class materials, but they are not listed in the gluster volume set help output or in the RHS public documentation, so I thought they were still preview options.
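For my own follow-up when I get home, the options I will be trying look roughly like this. This is a sketch from the upstream docs as I remember them, not something I have run yet - the volume name is a placeholder, and I am assuming the certificates are already deployed on each node:

```shell
# Enable TLS on the data path for an existing volume (volume name "myvol"
# is a placeholder). Assumes glusterfs.pem, glusterfs.key, and glusterfs.ca
# are already in place on every node, per the upstream SSL docs.
gluster volume set myvol client.ssl on
gluster volume set myvol server.ssl on

# Encryption at rest is a separate translator; the option name here is my
# assumption from the release notes and needs verifying, along with the
# master-key setup it requires.
gluster volume set myvol features.encryption on
```

I will report back once I have actually tried this against a test volume.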
Another reminder from the Q&A of the overview session was:
* Ceph started as an object store (RADOS) and added CephFS for file access.
* GlusterFS started as a file store and was extended for object store access.
Each performs better with different workloads, and each has advantages for its particular use case. It would be nice to see some documentation on which use cases benefit most from each product.
In the Data Compliance discussion it was pointed out that the current journaling mechanisms for GlusterFS were designed for replication (local and remote) and are now being enhanced for such topics as: crash consistency, a richer on-disk format, callback-based access, a multi-consumer model (lightweight, thread safe, ordered), data classification, LFU, object versions, and out-of-band notification.
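If I understood correctly, the journaling they were describing is the changelog translator that geo-replication already relies on. Turning it on today is just a volume option - again a sketch with a placeholder volume name, not commands I ran at the conference:

```shell
# Enable the changelog journal on a volume (the same journal that
# geo-replication consumes). "myvol" is a hypothetical volume name.
gluster volume set myvol changelog.changelog on

# Optional: how often the journal file rolls over, in seconds.
gluster volume set myvol changelog.rollover-time 15
```

The enhancements listed in the talk would presumably land on top of this same mechanism.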
The final push:After a long walk around Boston over lunch, I returned for a Ceph session on erasure coding and tiering advances. Maybe it was the time of day, but my brain was full and my absorption rate was declining with each session. I did stay for the final breakout session and attended the history and future of XFS. This was an entertaining presentation from one of the lead committers. I am glad I stayed.
The final keynotes also looked interesting, and I had enjoyed other presentations from at least one of the speakers, but I was cold and tired and decided to beat the rush hour out of town. Maybe next year. I think it is in my home stomping grounds of Raleigh.
Early submitted presentation slides are available at: http://events.linuxfoundation.org/events/vault/program/slides