Triage of doc bugs and how to get enough information to accurately answer and update them. This would be a discussion on how the process works, what we need to do to improve the process, and how best to engage those with the knowledge in order to do the fixes in a timely manner. We're currently at 112 bugs overall and need to get this back down to a reasonable number.
This session will start by describing the state and direction of the project and will end with a workshop on how and what to improve in the future.
More details at http://wiki.openstack.org/EfficientMetering/GrizzlySummit/StateOfMetering
This session will explain how to use ceilometer to measure interesting things in a deployed environment and discuss creating custom notification listeners and pollsters.
More details at http://wiki.openstack.org/EfficientMetering/GrizzlySummit/CustomizingCeilometer
SDKs are a vital resource for any ecosystem and SDKs for OpenStack are proliferating. We should discuss what we can do to handle this from a documentation perspective.
Some of the topics/questions to address are:
1. What SDKs are you using and why?
2. How do we track SDKs that support OpenStack?
3. Where do we track SDKs that support OpenStack?
4. What criteria do we use to allow an SDK to claim OpenStack support?
5. When documenting the SDK API at the function level, do you duplicate info from api.openstack.org or do you just link to it? Are there other options?
6. What's missing from the documentation of the SDKs you're using?
In this session we will discuss these issues and any others that participants think are relevant. We will try to reach some kind of consensus and pick a path forward.
http://etherpad.openstack.org/sdk-documentation
Live streaming on https://openstack.webex.com/openstack/onstage/g.php?t=a&d=923124308
Over the past few months, numerous requests have been made to the Ceilometer project to extend its scope from just metering to monitoring or alerting. This poses quite a few challenges, but as we instrument more and more OpenStack components to extract data from them, it makes sense to think about how we could extend our capabilities over time. We'll review two proposals: adding CloudWatch and alerting functionality.
More details at http://wiki.openstack.org/EfficientMetering/GrizzlySummit/BeyondMetering
OpenStack has a series of documents for administrators and API users. All these documents need to be translated as part of I18N.
Unlike code, documents have no "freeze date". The continuous development of documents makes translation management difficult.
This talk introduces the process and the technologies used in translation management during OpenStack document internationalization. It also includes a demo of creating a Chinese version of the manuals.
Live streaming on https://openstack.webex.com/openstack/onstage/g.php?t=a&d=923124308
Overview of the template format currently implemented in heat. Overview of the API heat provides for operating on templates. In this design session, expect 75% of time spent in open community design session on improvements to API and template model.
Session Lead will be Steven Dake.
CloudWatch is a fundamental dependency of heat that provides monitoring and alarm features. In this design session, a brief description of CloudWatch is given, as well as the current heat CloudWatch architecture. Expect 75% of the time to be spent in open community design discussion of the architecture and implementation feedback.
Session lead will be Angus Salkeld.
Overview of currently resolved heat roadmap items. Overview of current community derived roadmap items. Expect 75% of time spent brainstorming future roadmap items and identifying future changes necessary for Heat.
Session lead will be Steven Dake.
This session will re-state the general idea of openstack-common, review the progress we made in Folsom and discuss general plans for the project in Grizzly.
We will discuss renaming the project to Oslo, renaming the current repo to oslo-incubator, promoting some APIs out of incubation, a versioning scheme for library releases, and a clear-out of stalled incubation efforts.
During the Folsom summit we defined some requirements for a unified command line client program for OpenStack (http://wiki.openstack.org/UnifiedCLI). Since then we have begun development and made significant progress, but the project has stalled out a bit. During this session we will discuss restarting the project and find additional contributors.
As standard RPC calls are increasingly used to convey information that may need to be auditable, some messages may need to carry additional security. Namely, some messages may need to be signed and to contain a sequence number incremented on each host.
If done correctly, this would make it possible to trace a series of messages (for example, billable events) and ensure that they have not been tampered with and that no messages are missing.
See also session http://summit.openstack.org/cfp/details/118, which is a precursor to this one.
Based on a discussion we started with Eric Windisch, this effort overlaps with the work he started; we will be joining our efforts in this session and hopefully in future work.
https://blueprints.launchpad.net/nova/+spec/trusted-messaging
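As a rough sketch of what such signed, sequence-numbered messages might look like (key distribution is the hard part and is glossed over here; all names and fields are purely illustrative, not the blueprint's actual design):

```python
import hashlib
import hmac
import json

SECRET = b"per-host-shared-secret"  # hypothetical; real key management is out of scope

_last_seq = {}  # host -> last sequence number accepted


def sign_message(host, seq, payload):
    """Attach a per-host sequence number and an HMAC signature to a payload."""
    body = json.dumps({"host": host, "seq": seq, "payload": payload},
                      sort_keys=True)
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "signature": sig}


def verify_message(message):
    """Check the signature and that no sequence number was skipped or replayed."""
    expected = hmac.new(SECRET, message["body"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, message["signature"]):
        raise ValueError("message has been tampered with")
    body = json.loads(message["body"])
    last = _last_seq.get(body["host"], 0)
    if body["seq"] != last + 1:
        raise ValueError("missing or replayed message from %s" % body["host"])
    _last_seq[body["host"]] = body["seq"]
    return body["payload"]
```

A gap in the sequence (a dropped billable event) or a modified body would both raise on the receiving side, which is exactly the traceability property described above.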
The services framework from nova is in the process of being moved into openstack-common. I'd like to propose the following additions to the services framework to make it more useful:
* Signal Handling
--HUP to reload configs
--TERM die as soon as threads are finished
* Command and Control over RPC:
-- Pause (stop consuming regular queue messages)
-- Resume (continue consuming regular queue messages)
-- Reload (reload configs and continue)
-- Restart (run a new copy of the daemon and terminate this one)
* Potential Command and Control:
-- ChangeConfigOption (does it change the on-disk config or just the running one?)
-- StartPaused (start a new copy of the code Paused; requires running simultaneous code)
-- SoftTerminate (terminate the instance when threads are finished)
NOTE: The last two could be useful for the following strategy:
a) StartPaused
b) Pause (this one)
c) Resume (new one)
d) SoftTerminate(this one)
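The signal-handling and Pause/Resume pieces above could look roughly like this (a minimal sketch on Unix; the class and method names are illustrative, not the actual services framework API):

```python
import signal
import threading


class Service(object):
    """Sketch of the proposed signal handling and RPC command/control."""

    def __init__(self):
        self.paused = False
        self._shutdown = threading.Event()
        # HUP: reload configs; TERM: die as soon as threads are finished.
        signal.signal(signal.SIGHUP, self._on_hup)
        signal.signal(signal.SIGTERM, self._on_term)

    def _on_hup(self, signum, frame):
        # Re-read configuration without dropping in-flight work.
        self.load_config()

    def _on_term(self, signum, frame):
        # Stop accepting new work; exit once running threads finish.
        self._shutdown.set()

    def load_config(self):
        pass  # placeholder for real config parsing

    def pause(self):
        # RPC "Pause": stop consuming regular queue messages.
        self.paused = True

    def resume(self):
        # RPC "Resume": continue consuming regular queue messages.
        self.paused = False
```

SoftTerminate would reuse the same shutdown event as TERM, just triggered over RPC instead of by a signal, which is what makes the StartPaused/Pause/Resume/SoftTerminate rolling-restart strategy above possible.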
Using the message bus for messages
The RPC library in openstack-common includes an API for sending and receiving RPC calls and for sending notification messages. It does not include usable APIs for subscribing to notifications, or for sending and receiving generic messages to be consumed by multiple workers. The ceilometer and quantum projects are hijacking notifications and using a low-level API to achieve this goal, but we need to add a public API and ensure that all of the RPC drivers support the pattern. The goal of this session is to agree on the basic design for such an API to be added during Grizzly.
Along with a way to receive notifications, we should also consider adding a more generic message API that is not tied to RPC. For example, as adoption of the ceilometer project grows, we may want to include a library for creating and sending metering messages from other services. We have such a library, using RPC right now, and it is mostly decoupled from the rest of ceilometer. At DreamHost we are building a service that will use the library, which will help us discover any other issues with removing it from ceilometer. It really shouldn't need to be based on RPC, though, since the messages are one-way, meant to be consumed by subscribers, and require no reply.
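The kind of public subscription API being asked for might have roughly this shape (a toy in-memory stand-in; the real thing would sit on top of the RPC drivers, and these method names are hypothetical, not openstack-common's actual API):

```python
from collections import defaultdict


class NotificationBus(object):
    """Toy stand-in for a public notification subscribe/publish API."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, callback):
        # Multiple workers may subscribe to the same event type,
        # the pattern ceilometer and quantum currently hand-roll.
        self._subscribers[event_type].append(callback)

    def notify(self, event_type, payload):
        # One-way, fire-and-forget: no reply expected, unlike rpc.call().
        for callback in self._subscribers[event_type]:
            callback(payload)
```

The point of the session is agreeing on this surface so every RPC driver can implement it, rather than this particular toy dispatch loop.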
The goal of this session is to discuss whether the existing WSGI framework in openstack-common should be retained and used for future API work, or whether it makes more sense to look at some of the other Python web API frameworks and adopt something being used and maintained by the broader community. I will put together a few notes about why I chose not to use the openstack-common framework for the ceilometer API, and some pros and cons of other frameworks that we should evaluate.
XML request and response processing in OpenStack has been a hot topic recently. The OpenStack code and development community has a large bias towards JSON, and there have been proposals to deprecate support for XML. As a result, XML request/response processing is often unavailable or incorrect. However, XML is highly desired for accelerating adoption of OpenStack within the enterprise.
This session will be an open-ended discussion of various proposals on how to improve XML request/response processing in OpenStack. One possibility is that a framework could be leveraged or developed that would make it much easier for OpenStack developers to support XML in all of the OpenStack services.
This session will explore potential solutions for XML processing, including whether a reasonable framework can be developed and added to all the OpenStack services. Additionally, we would like to discuss how an overall process can be defined to resolve the current issues with XML as well as address future requirements in this area.
I started a set of patches towards the end of the Folsom cycle to allow the use of entry points for plugin and extension loading in nova. This was rightly rejected as a new thing that hadn't really been discussed. However, in general I still like the idea, given that Python does have this nice mechanism for plugin loading.
Why don't we walk through what it looks like, whether we should do it and how.
This session will primarily focus on merging rootwrap into openstack-common and further improvements to it. Time at the end of the session will be assigned to discuss incorporating keyring usage into the service infrastructure and any further service security infrastructure ideas.
This session will include the following subject(s):
Towards a unified and more featureful rootwrap:
Multiple projects (Nova, Cinder, Quantum) have adopted nova-rootwrap, so moving it to openstack-common sounds like a good idea to avoid code duplication and painful sync.
In this session we will discuss the plan to push rootwrap into openstack-common, as well as additional features for rootwrap (path searching, logging, Python code execution).
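At its core, rootwrap is a list of filters matched against the command a service asks to run as root; a stripped-down sketch of that matching (illustrative names only, not nova-rootwrap's actual classes):

```python
class CommandFilter(object):
    """Sketch of a rootwrap-style filter: a command may run as root only
    if its executable matches an allowed entry. The proposed path-searching
    feature would resolve bare names against a whitelist of directories."""

    def __init__(self, exe, run_as):
        self.exe = exe       # allowed executable
        self.run_as = run_as  # user to run it as

    def match(self, userargs):
        return bool(userargs) and userargs[0] == self.exe


def match_filter(filters, userargs):
    """Return the first filter authorizing the command, or None."""
    for f in filters:
        if f.match(userargs):
            return f
    return None
```

Moving this logic into openstack-common means Nova, Cinder and Quantum share one implementation of the matching (and of the proposed logging) instead of three diverging copies.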
All your passwords belong to keyrings?:
Clients are starting to use python-keyring for passwords. It would seem to make sense to have other sensitive passwords use a similar mechanism (for example in nova.conf, keystone.conf, or the paste api ini files). These places shouldn't contain clear-text passwords, even though they do right now. I'd like to get input on what people think about this and any issues they see.
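For example, a config option could hold a pointer into the keyring rather than the secret itself. python-keyring exposes `keyring.get_password(service, username)`; in this self-contained sketch a dict stands in for the system keyring, and the `$keyring` marker is purely a hypothetical convention:

```python
# Stand-in for the system keyring backend that python-keyring would query.
_FAKE_KEYRING = {("nova", "database"): "s3cret"}


def get_password(service, username):
    """Stand-in for keyring.get_password(service, username)."""
    return _FAKE_KEYRING.get((service, username))


def resolve_option(config_value, service, username):
    """Treat a config value of '$keyring' as a pointer into the keyring,
    so nova.conf never needs to contain the clear-text password."""
    if config_value == "$keyring":
        return get_password(service, username)
    # Legacy behaviour: clear-text value straight from the config file.
    return config_value
```

The open questions for the session are precisely the parts elided here: which backend holds the secrets on a headless server, and how services authenticate to it.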
Database code now exists in several projects, and much of it needs improvement. While improving the database abstraction itself is a good thing, it is difficult to do across various projects without it being moved into a common place. Additionally, there is code being sought for inclusion into openstack-common (oslo) which requires database access. For these reasons, a blueprint has been registered to move the database abstraction into common.
I will discuss my intentions, the new database library architecture, and how this will affect other blueprints such as db-threadpool and no-db-compute.
Most drivers in Cinder don't deal with local storage but rather have backend storage and an associated API. As an operator, managing multiple backends with Cinder becomes a hassle, as each backend requires its own instance (or more, for HA) of the volume manager.
This session aims to answer the following questions and more:
- Who would use it?
- What are the benefits?
- What are the downsides?
- Do the drivers need to change to accommodate this?
- How do we schedule, how to choose the right backend?
- Does scheduling support for this fall under volume_types or a new volume_backend?
For performance, metrics and scaling tasks there is a strong need to have various components/code instrumented (call times, error counts, roundtrip times...). Currently ceilometer has similar data but different fundamental requirements so this session should talk about whether to augment ceilometer with this data or create a new tool. New code 'decorators' (which are not relevant to ceilometer) need to also be added to gather this information.
Many stackers are depending on OpenStack for large scale and revenue generating operations. In these environments, it is critical to monitor the health and optimize the performance of OpenStack components.
Expand on the discussions held on IRC and on the etherpad about use cases for volume_types and how the drivers/scheduler should handle them. Talk more about the metrics the drivers need to expose to the scheduler.
http://etherpad.openstack.org/cinder-usecases
This session will include the following subject(s):
New features:
New features that are too small to warrant a slot of their own:
Secure attach - modifying the attach path to go via a cinder control node before going to the compute node, so that the complete compromise of a compute node does not result in any more volumes being exposed than already happen to be attached to that node.
Retain glance metadata for bootable volumes
https://blueprints.launchpad.net/cinder/+spec/retain-glance-metadata-for-billing
List bootable volumes - proposal for the concept of a bootable volume - a volume created purely from a Glance image, for ease of UI design
Volume backup - an API to copy a volume to object store
IOPs metering / billing
https://blueprints.launchpad.net/cinder/+spec/volume-usage-metering
Volume resize
Volume status state machine (ClayG)
OpenStack's clients and APIs are the first point of contact for users of OpenStack clouds, and the foundation upon which tools can be built. As such, it's time we start focusing on making that a first-class consistent experience.
Simple things like providing consistently named and implemented methods, and consistent sets of features (wherever implementation allows). If I've used one core OpenStack API I should be able to expect the same from another.
Examples of these types of features include:
* Filtering a list of resources on any attribute of the resource. (e.g. GET /servers?security_group=foo)
* Ordering (timestamp, alphabetical, etc.)
* Bulk actions (GET in bulk, DELETE in bulk, etc.)
* Many more...
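Server-side, the consistent contract for the filtering example above could boil down to something this simple (a sketch, with resources modeled as plain dicts; no existing client or API implements exactly this):

```python
def filter_resources(resources, **filters):
    """Generic filtering on any resource attribute: the behaviour a
    consistent 'GET /servers?security_group=foo' would map to, for
    any resource type and any attribute."""
    return [r for r in resources
            if all(r.get(key) == value for key, value in filters.items())]
```

The value of agreeing on a contract like this is that `filter_resources(servers, security_group="foo")` and `filter_resources(volumes, status="available")` behave identically, so a user of one core API can carry their expectations to the next.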
The goal of this session is to determine a common set of features the community/user-base needs, and make that a contract which every API and client should strive towards.
It is *not* meant to rewrite any existing APIs, or make any backwards-incompatible changes. This is only to agree on our collective future and perhaps look at the easiest initial targets.
The current volume API needs some TLC. Bringing a volume service to production has helped us identify areas where the current volume API is lacking. I would like to propose that we take a look at what can be done to whip the current volume API into shape for a v2.0 of the API, and begin discussing where the API should go down the road.
A short session, hopefully early in the week, where we can talk about what we accomplished this past cycle in the CI and dev infrastructure world, as well as what it is that we're planning to do already. Hopefully if we tell everyone what's already on the table before we start planning other things, it can be in the back of everyone's heads as we plan during the rest of the week.
Rackspace Cloud Block Storage is launching soon. I'd like to describe briefly what the team built and what parts Lunr and OpenStack play in the product, then dive into some lessons learned in the design and development of a scalable, commodity-hardware-based storage solution and in interfacing with nova-volumes, cinder, and the Xen storage manager. After that, if anyone is still awake, I'll probably end the talk with a quick wrap-up highlighting some of the areas where we think OpenStack volumes has the biggest opportunities to improve, and how, with the community's blessing, we can contribute!
The OpenStack Vulnerability Management Team is responsible for the process of handling security vulnerabilities reported to the project. The purpose of this session is to discuss any potential improvements that could be made to our handling of security issues.
Team info: http://www.openstack.org/projects/openstack-security/
Process: http://wiki.openstack.org/VulnerabilityManagement
Currently we trigger a number of changes to Launchpad based on content of commit messages submitted to Gerrit: bug status changes in particular.
This has proven very useful, but we can do better, in particular driving blueprint status from commit messages. This may involve stricter rules (header-style data in commit messages?) to be able to specify precise things like Related-Bug, Fixes-Bug, Finalizes-Blueprint, Partially-Implements-Blueprint, etc. This session will discuss if/how/what we should implement.
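Parsing such header-style data is straightforward once the names are agreed; a sketch of what the Gerrit-side hook could do (the header names are the ones proposed above, which is exactly what the session would pin down):

```python
import re

# Candidate header names; the session is about agreeing on the final set.
HEADER_RE = re.compile(
    r"^(Related-Bug|Fixes-Bug|Finalizes-Blueprint|"
    r"Partially-Implements-Blueprint):\s*(\S+)\s*$",
    re.MULTILINE)


def parse_commit_headers(message):
    """Extract (header, value) pairs from a commit message, in order, so a
    hook can drive Launchpad bug and blueprint status from them."""
    return HEADER_RE.findall(message)
```

Strictness is the design choice here: a fixed `Header: value` line format is trivially machine-parseable, at the cost of being less forgiving than today's free-form commit message scanning.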
Time permitting, we'll look into other Gerrit improvements, like a LP-prioritized review list (think http://reviewday.ohthree.com/)
The goal of this blueprint is to implement a driver for cinder. It will make it possible to create volumes on local storage and back up point-in-time snapshots of your data to Swift for durable recovery. These snapshots are incremental backups, meaning that only the blocks on the volume that have changed since your last snapshot are saved.
Even though the snapshots are saved incrementally, when you delete a snapshot, only the data not needed by any other snapshot is removed. So regardless of which prior snapshots have been deleted, all active snapshots contain all the information needed to restore the volume. In addition, the time to restore the volume is the same for all snapshots, offering the restore time of full backups with the space savings of incremental ones.
In a word, our solution is local storage + qcow2 images + dependent snapshots + Swift. This is similar to http://wiki.cloudstack.org/display/RelOps/Local+storage+for+data+volumes, but we add incremental snapshots, which CloudStack lacks.
If you are interested in this topic, please read the full specification: http://wiki.openstack.org/LocalStorageVolume
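The block-level incremental idea can be illustrated with a tiny sketch (in practice this is what the qcow2 backing chain gives us for free; the block size and function names here are illustrative only):

```python
BLOCK_SIZE = 4  # tiny blocks to keep the example readable; real ones are KB+


def changed_blocks(previous, current, block_size=BLOCK_SIZE):
    """Return {offset: data} for blocks that differ from the last snapshot,
    i.e. the only data an incremental snapshot has to upload to Swift."""
    delta = {}
    for offset in range(0, len(current), block_size):
        if current[offset:offset + block_size] != \
                previous[offset:offset + block_size]:
            delta[offset] = current[offset:offset + block_size]
    return delta


def restore(base, deltas):
    """Apply a chain of incremental snapshots on top of a base image.
    Restore cost is one pass over the chain, whichever snapshot you pick,
    matching the 'full-backup restore time' property described above."""
    volume = bytearray(base)
    for delta in deltas:
        for offset, data in delta.items():
            volume[offset:offset + len(data)] = data
    return bytes(volume)
```

Snapshot deletion then reduces to reference counting blocks across the chain: a block is reclaimable only when no remaining snapshot's delta (or the base) still needs it.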
We are migrating the translation hosting for OpenStack projects to Transifex. To make use of these translations two things need to happen.
1) Pull translations from Transifex and merge them into the git repositories for the OpenStack projects. Jenkins is currently doing this for Nova using two jobs. The first pushes updated translation template files to Transifex when changes are merged and the second pushes translation changes to Gerrit once a day. This is working for Nova but is not flexible enough for Horizon and OpenStack Manuals. Need to discuss the expectations of each project and ways to make the Jenkins Job interface common across projects (and if a common interface is not possible determine the subsets that are needed).
2) Once projects have included translations in the git tree/packages/etc the projects need to be able to consume these resources. I believe Horizon may be the only project currently capable of doing this. For example Nova assumes the translations are located in a system specific directory which is not the case if Nova has been installed from source. Need to determine how best to consume translation resources. Perhaps a module is needed in openstack-common.
Today, OpenStack has support for block-based storage and object storage, but there is no explicit support for file-based storage. For certain applications, file-based storage has significant advantages over block storage and object storage, and a large amount of existing software is designed around file-based storage. We've heard demand from the operator community for a common management interface for file-based storage, and ideally, a common management interface for storage in general. We're proposing adding management of CIFS and NFS file sharing to OpenStack, and we believe that extending Cinder is the most logical way to achieve this, both because Cinder is already in the business of managing storage devices, and because users don’t desire yet another management interface. NetApp has developed a prototype to this end. It includes extended APIs as well as an implementation of a driver backend for Cinder. We'd like to unveil our progress and discuss these with the developer and user communities to see if this approach makes sense, and achieve consensus on a path forward.
Horizon is the framework for building the OpenStack Dashboard, but anyone can use it to build whatever dashboards they like.
This session will quickly recap the Essex "Building on Horizon" basics, dive into new features like Workflows, and leave plenty of time for people who are currently working on building new components for the dashboard to ask specific questions.
We'll have a look at the current state of our communication tools and community resources: mailing-lists, IRC channels, Forums, Q&A sites, Bugtrackers, and discuss which improvements we can push during the Grizzly timeframe.
At the Folsom Design Summit we discussed the issue of reconciling the python dependencies of OpenStack projects and establishing a process for reviewing changes to our dependencies.
This session will discuss the progress on the topic since then and our plans for the Grizzly series.
A particular focus of the session will be how we can gather data from distros that will inform our decisions about proposed dependency updates.
In Folsom we got an initial quantum + horizon integration working. However, there's still a lot more we need to expose, including things that arrived late in Folsom (L3, floating IPs, provider networks) and things that will arrive in Grizzly (security groups, load balancing).
Akihiro will be organizing this session.
A working group session wherein current plans for Grizzly can be shared and anyone can suggest new features or blueprints they think should be addressed for the OpenStack Dashboard.
With OpenStack Foundation membership taking over from Launchpad as the main source of identification for OpenStack contributors, it's time to think about an identity system for the whole project.
Work is already in progress to integrate CLA management in the OpenStack Gerrit review system with the members' database, in order to reduce friction during the onboarding of new developers and increase the reliability of our Contributor License Agreement process.
Borrowed from Google's way of managing CLAs, OpenStack's Gerrit will comply with the Foundation's bylaws and make onboarding easier.
This session will illustrate the new process and report on its status. Anybody that will be hiring developers to work on OpenStack in the future should attend this session.
OpenStack as a whole performs innumerable actions asynchronously, and in general leaves other projects in the dark as to those goings on. This affects Horizon especially, but enabling one project to learn about events in another project would be useful in myriad ways. For example:
A tenant is deleted in Keystone; instances are subsequently orphaned and left running in Nova, images are orphaned in Glance, and so on. If Keystone sent out a signal when that tenant was deleted, a concerted cleanup action could take place throughout the entire system...
The Folsom cycle was the second cycle where we maintained a stable branch for the previous release. We will look back over the stable/essex maint efforts and identify successes and failures.
The stable-maint process still has some rough edges and so we will discuss ideas for improvements, ideally setting some specific goals for stable/folsom maintenance.
Currently OpenStack uses a painful conglomeration of queues, push, pull, poll and other forms of communication to pass around the status of what's happening in the system.
A glorious future would be one in which everything is able to communicate in real time with efficient push/pubsub notifications.
This session aims to bring together the various stakeholders (core projects, downstream distros, security experts, etc) to discuss the right solutions for the future whether they get implemented in Grizzly or not.
One of the objectives of the OpenStack Foundation is to "Make OpenStack the ubiquitous cloud operating system". In order to reach that objective the Foundation needs to understand more about the usage of the OpenStack software.
Until now we've been counting downloads from Launchpad, but that source is less and less accurate because OpenStack distributions have become the main way to test, run and deploy OpenStack. We need something like the Canonical Census, which sends a daily anonymous 'I'm alive' ping for each Ubuntu installation shipped by an OEM, or Mozilla's Telemetry, which captures more sophisticated details to gain insight into the usage of Firefox. (http://arewesnappyyet.com/)
This discussion is important for the OpenStack Foundation and should be joined by users and owners of distributions alike. The objective of the session is to identify what sort of information is needed to track OpenStack adoption and what systems we need to put in place to implement it.
The emergence and disappearance of features in OpenStack (at least nova) is pretty volatile. Zones? Diablo. No Zones but Host aggregates in Essex - but only half. Tenants vs. Projects? What is it? Until when?
With the project getting more mature I think it is now the time to have a coordinated deprecation process. We should have helper methods in the code that can easily flag deprecated stuff as such so that users will be warned that this feature/way of doing things might not last.
This does not necessarily mean that we lose dev speed. The deprecated stuff should however last at least one release.
Let's discuss.
A discussion on how we'll organize the Grizzly release cycle, including release schedule, number of milestones, and the various freezes.
We'll also cover evolutions in the branching model, as well as versioning issues.