Whether you want to build the software, run it, grow the community or just learn more about it, there will be content, workshops and design sessions for you to attend at the OpenStack Summit, Oct 15-18 in San Diego. Stick around Friday for the first OpenStack service day, a half-day beach cleanup.

Register now! openstacksummitfall2012.eventbrite.com
Design Summit
Monday, October 15
 

9:50am PDT

Closing quantum/nova gaps
With Folsom, there were a couple of key gaps in Quantum/Nova integration:

- no equivalent of multi_host for quantum
- metadata service did not support overlapping IPs
- nova security groups couldn't handle overlapping IPs (this is being handled by security groups in Quantum, but we still need someone to do the work for iptables based security groups).
- nova CLI commands for security groups + floating IPs do not proxy to quantum.
- define best model for handling case where nova VM is spun up with no NICs.
- tools to migrate existing nova configs to quantum

With this session, we should also come to a conclusion on what network functionality will continue to live in nova for grizzly.


Monday October 15, 2012 9:50am - 10:30am PDT
Windsor BC

9:50am PDT

Docs Bug Triage and How to Get Answers

Triage of doc bugs and how to get enough information to accurately answer and update them. This would be a discussion on how the process works, what we need to do to improve the process, and how best to engage those with the knowledge in order to do the fixes in a timely manner. We're currently at 112 bugs overall and need to get this back down to a reasonable number.


Monday October 15, 2012 9:50am - 10:30am PDT
Emma C

9:50am PDT

OpenStack Swift intro for devs
Coming into a new codebase, especially a large one, can be daunting. In this session, we will go over the basic structure of the swift codebase, examine how things are put together in the code, and how to effectively get patches into swift.


Monday October 15, 2012 9:50am - 10:30am PDT
Annie AB

9:50am PDT

Real-time node monitoring
We would like to explore how we can monitor the status of nodes and display real-time info about power usage, thermal state, and correctable/uncorrectable errors. How do we integrate into Nova things like Intel's Node Manager (a system for monitoring/controlling machines) and anything similar that AMD provides?


Monday October 15, 2012 9:50am - 10:30am PDT
Emma AB

9:50am PDT

State of Metering

This session will start by describing the state and direction of the project and will end with a workshop on how and what to improve in the future.

More details at http://wiki.openstack.org/EfficientMetering/GrizzlySummit/StateOfMetering


Monday October 15, 2012 9:50am - 10:30am PDT
Maggie

11:00am PDT

backportable database migrations
Backporting fixes and features to stable branches that require migrations is currently not well supported. This session would discuss ideas we could implement to better support database migrations for stable branch maintenance and backports. Potential topics include:

- idempotent migrations
- reserving migration numbers for stable branch maintenance
- would switching to alembic (vs. sqlalchemy-migrate) help?
- patterns to follow when backporting things
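The "idempotent migrations" idea above can be sketched in a few lines (stdlib sqlite3 only; the table and column names are made up for illustration):

```python
import sqlite3

def column_exists(conn, table, column):
    # PRAGMA table_info yields one row per column; row[1] is the name.
    return any(row[1] == column for row in
               conn.execute(f"PRAGMA table_info({table})"))

def upgrade(conn):
    # Idempotent: safe to re-run if a backport already added the column.
    if not column_exists(conn, "instances", "locked_by"):
        conn.execute("ALTER TABLE instances ADD COLUMN locked_by TEXT")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances (id INTEGER PRIMARY KEY)")
upgrade(conn)
upgrade(conn)  # second run is a no-op instead of failing
```

Guarding each step this way means a stable-branch migration can run whether or not the same change was already applied upstream.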


Monday October 15, 2012 11:00am - 11:40am PDT
Emma AB

11:00am PDT

Customizing Ceilometer to Measure Your Cloud

This session will explain how to use ceilometer to measure interesting things in a deployed environment and discuss creating custom notification listeners and pollsters.

More details at http://wiki.openstack.org/EfficientMetering/GrizzlySummit/CustomizingCeilometer


Monday October 15, 2012 11:00am - 11:40am PDT
Maggie

11:00am PDT

Inter-cloud object storage: colony
This session will be a progress report on an inter-cloud object storage project called colony, which is based on swift. Inter-cloud object storage is a common storage which can be accessed from multiple clouds. It should be geographically distributed for availability and good performance. Colony can be viewed as a network-aware version of swift, because it needs to perform without huge performance degradation even over high latency network connections.

The discussion points:

1. How to make Swift network-aware
In the PUT operation, all replicas are written in the same site as the proxy server, instead of being written to the location the ring specifies.
The replication to the original positions is done by the object replicator asynchronously. After the replication is confirmed, the local copies corresponding to the replicas are deleted. In the GET operation, the ‘nearest’ replica is chosen by the mechanism described in the next section, instead of being chosen randomly. The proxy server works with the cache mechanism as well.
2. How to measure network distance
We use the zone information in the ring for the network distance measurement. The zone information is a fixed decimal number and we can allocate the numbers freely; therefore, we can assign them to specify actual locations. We also add two config items (ring_zone_site_number and near_distance) to the proxy server for detecting which account/container/object servers are nearby. When the difference between a node's zone number and ring_zone_site_number is within the near_distance value, the proxy server treats that account/container/object server as located at the same site. Let's say the nodes in data center #1 are in zone-100 to zone-199, the nodes in data center #2 are in zone-200 to zone-299, and so on. If proxy server #1 has 100 as its ring_zone_site_number and 100 as its near_distance, an object put through proxy #1 is stored in data center #1 at first, and replicated to its proper location asynchronously. By using this sort of convention, the software can know the network distance without our modifying the ring structure or the code related to it.
3. Prototyping
We introduce a prototype of a network-aware OpenStack Swift based on swift-1.4.8. The modifications were made to the following code:
- swift/proxy/server.py: proxy server code
- swift/common/ring/ring.py: query process to the ring
4. Evaluations
We describe the evaluation results of the prototype in the following environment.
Two sites (Tokyo and Sapporo) are connected through a high speed network called SINET-4. Both sites have a proxy server and storage servers, consisting of account servers, container servers, and object servers. There are three zones at each site.
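A minimal sketch of the zone-numbering convention in point 2 (ring_zone_site_number and near_distance are the config items named in the abstract; the exact comparison rule is our assumption):

```python
def same_site(node_zone, ring_zone_site_number, near_distance):
    # One plausible reading: a node is local when its zone number falls
    # within near_distance above this proxy's site number.
    return 0 <= node_zone - ring_zone_site_number < near_distance

# Data center #1 uses zone-100..zone-199, data center #2 zone-200..zone-299.
local = same_site(150, 100, 100)   # node in DC #1, proxy site number 100
remote = same_site(250, 100, 100)  # node in DC #2
```

The point of the convention is exactly this: distance falls out of arithmetic on zone numbers, with no change to the ring structure itself.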



Monday October 15, 2012 11:00am - 11:40am PDT
Annie AB

11:00am PDT

Nova/Quantum vif-plugging Interaction
Note: this may be moved to a split session.

There are several aspects of vif-plugging that could be improved.

Some thoughts are here: https://bugs.launchpad.net/quantum/+bug/1046766

Other topics:
- generic vif-plugging mechanism to report data back to Quantum.
- moving all vif-plugging drivers into Nova (should have been done a long time ago).
- auto-setting vif-plugging based on response from quantum
- supporting multiple vif-plugging types at same time

Note: garyk will be leading this session.


Monday October 15, 2012 11:00am - 11:40am PDT
Windsor BC

11:00am PDT

SDK Documentation Discussion

SDKs are a vital resource for any ecosystem and SDKs for OpenStack are proliferating. We should discuss what we can do to handle this from a documentation perspective.

Some of the topics/questions to address are:

1. What SDKs are you using and why?
2. How do we track SDKs that support OpenStack?
3. Where do we track SDKs that support OpenStack?
4. What criteria do we use to allow an SDK to claim OpenStack support?
5. When documenting the SDK API at the function level, do you duplicate info from api.openstack.org or do you just link to it? Are there other options?
6. What's missing from the documentation of the SDKs you're using?

In this session we will discuss these issues and any others that participants think are relevant. We will try to reach some kind of consensus and a pick a path forward.

http://etherpad.openstack.org/sdk-documentation

Live streaming on https://openstack.webex.com/openstack/onstage/g.php?t=a&d=923124308


Monday October 15, 2012 11:00am - 11:40am PDT
Emma C

11:50am PDT

Beyond Metering - Extending Ceilometer

Over the past few months, numerous requests have been made to the Ceilometer project to extend its scope from just metering to monitoring or alerting. This poses quite a few challenges, but as we are instrumenting more and more OpenStack components to extract data from them, it makes sense to think about how we could extend our capabilities over time. We'll review two proposals: adding cloudwatch and alerting functionality.

More details at http://wiki.openstack.org/EfficientMetering/GrizzlySummit/BeyondMetering


Monday October 15, 2012 11:50am - 12:30pm PDT
Maggie

11:50am PDT

Nova Bug Handling
Nova has quite a high volume of bug reports. We have been having a hard time keeping up with both triage and addressing legitimate issues that have been confirmed. Let's use this session to discuss things that we could do differently over the next release cycle to improve this situation.


Monday October 15, 2012 11:50am - 12:30pm PDT
Emma AB

11:50am PDT

OpenStack Documentation and Translation Management

OpenStack has a series of documents for administrators and API users. All of these documents need to be translated during I18N.
Unlike code, there is no "freeze date" for documents. The continuous development of the documents makes translation management difficult.
This talk introduces the process and the technologies used in translation management during OpenStack document internationalization. It also includes a demo creating a Chinese version of the manuals.

Live streaming on https://openstack.webex.com/openstack/onstage/g.php?t=a&d=923124308


Monday October 15, 2012 11:50am - 12:30pm PDT
Emma C

11:50am PDT

Quantum API improvements
Note: we will also discuss improvements to the internal DB model layer to better handle extensions.

This session should cover the following topics:
1) Promotion of extensions into core, in particular the L3, and provider networks extensions.
2) Approval of a plan for deprecating attributes and replacing them without breaking backward compatibility
3) Decision on renaming tenant_id to project_id in order to align with Keystone v3
4) Formally describing and documenting the Quantum API through schemas, and assessment of the potential for code and documentation generation.
5) Other API improvements, such as pagination and link to resources in responses.
6) Improvements to the Quantum client library and CLI


Monday October 15, 2012 11:50am - 12:30pm PDT
Windsor BC

11:50am PDT

Swift: Solving Geographically Distributed Clusters
This session will include the following subject(s):

Geographic replication:

Discussion about how to make Swift work well when distributed among multiple datacenters. Swift is currently architected to run on a homogeneous, high-bandwidth, low-latency network. What happens when you throw low-bandwidth, high-latency links into the mix, and how can it be fixed?


Monday October 15, 2012 11:50am - 12:30pm PDT
Annie AB

1:50pm PDT

Does Swift belong in Openstack Core?
Now that I have your attention...

This journey began when I started thinking about some pain points with Swift in general. One issue is the view that Swift should be the point of abstraction for other storage systems to be used. This has led to some code that I feel doesn't belong in Swift proper. Others have taken the approach of creating a compatibility layer on top of their storage system. I don't think either of these is sustainable long term.

Since the beginning of OpenStack, there has not been a clear definition of what OpenStack CORE means. In this talk I would like to present a clear definition of what OpenStack CORE should be, how that applies to Swift, and possibly other projects.

I will also present one option for how Swift could be split into two projects. One would be part of OpenStack core and would provide the APIs and required bits to be a point of abstraction for object storage in OpenStack with pluggable back ends. The other would pull the actual implementation of Swift into a separate project that would include a driver for OpenStack object storage.


Monday October 15, 2012 1:50pm - 2:30pm PDT
Annie AB

1:50pm PDT

Heat Orchestration API draft design session

Overview of the template format currently implemented in heat, and of the API heat provides for operating on templates. In this design session, expect 75% of the time to be spent in open community design discussion on improvements to the API and template model.

Session Lead will be Steven Dake.


Monday October 15, 2012 1:50pm - 2:30pm PDT
Maggie

1:50pm PDT

Incremental Imaging
Whether and how to add the concept of incremental images (think redirect-on-write partitions for snapshots) to Glance, and how to create them in Nova.

This feature could enable faster snapshots and reduced storage requirements. But there are some hairy details regarding parent image deletion and where the image chain is reassembled before it can be booted.


Monday October 15, 2012 1:50pm - 2:30pm PDT
Emma C

1:50pm PDT

Quantum python client and CLI improvements
The current Quantum python client is a very thin wrapper over the API protocol. In Grizzly, we should look to improve the python client by making the python library more pythonic and easier for developers to consume. (Note: these changes will fit nicely with Ian's proposed session about client lib documentation.) In addition to the python library, we will also discuss the CLI and how to improve consistency across the command set.

Python Library Discussion Points:

- The python bindings are too thin a wrapper and leak too much of the wire protocol.
For example:
client.list_ports() should return a list, instead of requiring client.list_ports()['ports']
client.create_port(dict(port=my_port_attributes))

- Adding specific named params to the client library code. Update the method hierarchy of the client lib to improve code reuse of basic CRUD operations. The interfaces could possibly be created by automatic generation tools. Specific parameter names will help developers understand how the library works.

- Discuss an optional interface to the client bindings that returns proxy objects (ala SQLSoup).
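A rough sketch of the kind of unwrapping being proposed (FakeHTTP and the class layout are illustrative, not the actual client code):

```python
class FakeHTTP:
    """Stands in for the real HTTP layer; the payload shape mimics the
    wire protocol, which wraps collections in a top-level key."""
    def get(self, path):
        return {"ports": [{"id": "p1"}, {"id": "p2"}]}

class QuantumClient:
    def __init__(self, http):
        self._http = http

    def list_ports(self):
        # Unwrap the envelope so callers get a plain list.
        return self._http.get("/ports")["ports"]

client = QuantumClient(FakeHTTP())
ports = client.list_ports()
```

Hiding the envelope in the library, rather than in every caller, is the "more pythonic" direction the session proposes.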

CLI Discussion Points:

- Inconsistent option names

- The parsing frequently breaks on extended attributes if you forget the '--', which is inconsistently used. We should be able to make Cliff parse unknown flags correctly (per Cliff's author, Doug H.).

- Data type conversions. The CLI should be able to automatically coerce argument types instead of requiring type flags such as bool|int.

- Embedded dicts as command line args make the CLI harder to use.

- Better error handling for unknown arguments.




Monday October 15, 2012 1:50pm - 2:30pm PDT
Windsor BC

1:50pm PDT

service group api
Currently nova compute nodes periodically write to the database (every 10 seconds by default) to report their liveness. The Service Group API factors out this functionality and makes it a set of abstract internal APIs with a pluggable backend implementation. We'll discuss the traditional DB-based backend and the ZooKeeper backend, whether this is more suitable for openstack-common so that other projects may adopt it too, and the related blocking issues.


Monday October 15, 2012 1:50pm - 2:30pm PDT
Emma AB

2:40pm PDT

Implementing CloudWatch for OpenStack

CloudWatch is a fundamental dependency of heat that provides monitoring and alarm features. In this design session, a brief description of CloudWatch is given, as well as the current heat CloudWatch architecture. Expect 75% of the time to be spent in open community design discussion of the architecture and implementation feedback.

Session lead will be Angus Salkeld.


Monday October 15, 2012 2:40pm - 3:20pm PDT
Maggie

2:40pm PDT

Make PostgreSQL a first class citizen
PostgreSQL is a popular open source database, and while OpenStack has basic support for it there are some rough edges. Let's talk about things we could do to make this better, including:

- connection/pooling concerns
- any features we may need (Nova query logging support, for example)
- testing?
- gating?



Monday October 15, 2012 2:40pm - 3:20pm PDT
Emma AB

2:40pm PDT

Multiple Image Locations
We need to explore the benefits of tracking multiple copies of the same raw image data under a single entry in Glance. This session is intended to present some use cases for stashing image data in multiple places and offer some solutions.


Monday October 15, 2012 2:40pm - 3:20pm PDT
Emma C

2:40pm PDT

quantum + tempest w/gating
Top priority for the Quantum team in Grizzly is getting good integration with tempest and initiating gating with quantum tests.

Note: Nachi will be leading this session.


Monday October 15, 2012 2:40pm - 3:20pm PDT
Windsor BC

2:40pm PDT

Swift Feature List Ideas Walkthrough
This session will include the following subject(s):

Feature list ideas walk-through:

Let's walk down the list of feature ideas for swift (see http://wiki.openstack.org/SwiftOct1Meeting for a good start on the list) and discuss who can work on different items.

Swift Support for ARM:

In the last couple of releases we made great progress with Nova, Glance, and Keystone running on ARM. We have to keep this progress up, but now we have to focus on Swift as well.


Monday October 15, 2012 2:40pm - 3:20pm PDT
Annie AB

3:40pm PDT

Heat Roadmap past and future

Overview of currently resolved heat roadmap items and of current community-derived roadmap items. Expect 75% of the time to be spent brainstorming future roadmap items and identifying changes necessary for Heat.

Session lead will be Steven Dake.


Monday October 15, 2012 3:40pm - 4:20pm PDT
Maggie

3:40pm PDT

Image Workers
Whether and how to add asynchronous worker processing to Glance.

Asynchronous processing in Glance could be very useful for handling certain tasks:
- periodic janitorial tasks
- copying in image data when an image is registered with a Copy-From header
- (later) image format conversion


Monday October 15, 2012 3:40pm - 4:20pm PDT
Emma C

3:40pm PDT

More pluggable nova
The nova project should be broken up further. cinder + common was a good first step, let's go further.

I think it would be a good idea to have separate modules (each with its own git repo, python or distro packages...) for things that are mutually exclusive (e.g. when I use the Hyper-V hypervisor I will not be using Xen on the same node, so these should be different modules).

I can see the following things to be broken out:
- nova objectstore
- nova certs
- nova keys
- nova hypervisor xen
- nova hypervisor kvm
- nova hypervisor vmware
- nova hypervisor hyper-v
- nova networking "classic"
- nova networking quantum
- nova cloudpipe
- nova compute
- nova scheduler filter
- nova scheduler etc.

This will also lead to the need to define the APIs between the modules better which I think is a plus (e.g. what exactly will the hypervisor API look like? Which method calls with what kind of params etc.)
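As a sketch of what "defining the APIs between the modules" could look like, a hypervisor interface might be pinned down as an explicit base class (method names here are hypothetical, not Nova's actual driver API):

```python
import abc

class HypervisorDriver(abc.ABC):
    """Hypothetical module boundary; method names are illustrative,
    not Nova's actual driver API."""

    @abc.abstractmethod
    def spawn(self, instance): ...

    @abc.abstractmethod
    def destroy(self, instance): ...

class FakeDriver(HypervisorDriver):
    def __init__(self):
        self.instances = set()

    def spawn(self, instance):
        self.instances.add(instance)

    def destroy(self, instance):
        self.instances.discard(instance)

driver = FakeDriver()
driver.spawn("vm-1")
```

An explicit interface like this is what would let the xen/kvm/vmware/hyper-v pieces live in their own repos while nova depends only on the contract.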

One of the objections might be "but all the dependencies". Yes. This is a challenge, but this should be addressed as well. Sane pip-require files should do that.


Monday October 15, 2012 3:40pm - 4:20pm PDT
Emma AB

3:40pm PDT

On-disk data encryption in Swift
This session will include the following subject(s):

On-disk data encryption:

There are several proposals of on-disk encryption for Swift. We want to elaborate on one of them (http://www.mirantis.com/blog/openstack-swift-encryption-architecture/) and start an open discussion on this topic.


Monday October 15, 2012 3:40pm - 4:20pm PDT
Annie AB

3:40pm PDT

Quantum Scheduler / Quantum + XenServer
note: split session.

This session will include the following subject(s):

Quantum Scheduler:

Scheduler support for quantum

- HA support
- Nova scheduler integration
- Agent monitoring
- Agent resource management



Quantum and XenAPI / XenServer:

There seem to be lots of issues getting the Folsom version of Quantum working with XenServer.

Let's look at making the Open vSwitch driver + DHCP work well, making it work well with DevStack, and getting it tested.


Monday October 15, 2012 3:40pm - 4:20pm PDT
Windsor BC

4:30pm PDT

Modular Quantum L2 Plugin and Agent
Beyond support for GRE networks, there is no meaningful server-side
difference between the current openvswitch and linuxbridge
plugins. Both support VLAN tenant networks and the same set of
provider network types. Their agents also are similarly structured,
differing mainly in the specific networking commands they run in
response to the same set of events. We will propose replacing these
two plugins in Grizzly with a modular Quantum plugin and agent where
both L2 network types and the mechanisms supporting those types
plug in as drivers, allowing multiple networking technologies to be simultaneously deployed within a Quantum installation.

Network types such as VLAN, GRE, and flat will be implemented in the
server-side plugin as network-type-drivers. These network-type-drivers
are responsible for maintaining any type-specific DB schema, pooling
and allocating tenant networks, and validating parameters for provider
networks. But a network-type-driver is not tied to any specific
mechanism for realizing that network type on participating compute or
L3 agent nodes.

The current Open vSwitch and Linux bridging mechanisms would plug into
the modular L2 agent as agent-mechanism-drivers. Nodes using different
agent-mechanism-drivers for a common network type could co-exist and
interoperate within a Quantum deployment. Multiple
agent-mechanism-drivers could also plug into a node's L2 agent
simultaneously to handle different network types. We will also explore
server-mechanism-drivers that would integrate certain centralized or
controller-based mechanisms on the server-side, without the need for
L2 agents on the participating nodes.

This modular L2 plugin and agent architecture is not intended to
replace the current Quantum plugin abstraction. Non-modular plugins
are more appropriate when an external controller completely controls
the data center network. We will discuss the benefits this modular
architecture offers in cases where a variety of networking
technologies co-exist within a data center. In particular, we believe
this approach is more flexible and more maintainable than the
meta-plugin approaches currently used in Folsom.



Monday October 15, 2012 4:30pm - 5:10pm PDT
Windsor BC

4:30pm PDT

Securing Swift's internals
The current security model for Swift is one of an external, untrusted network where clients live and an internal, fully-trusted network that the cluster internals use. The proxy server is the only piece that straddles the two.

If someone gains access to the internal network, they have complete control over the entire Swift cluster; they can modify any stored data in any way they want, and maybe even execute arbitrary code on the Swift machines.

In this session, we'll discuss what can be done to make Swift more robust to intrusions on its internal network.


Monday October 15, 2012 4:30pm - 5:10pm PDT
Annie AB

4:30pm PDT

Splitting out EC2 API from Nova
The PPB (Now TC) decided in May that 3rd Party APIs should not be in core. [https://lists.launchpad.net/openstack/msg11916.html]

This session will break down how to pull the EC2 API out into a separate project. With the RPC API being versioned, it should be easier to track internal API changes.


Monday October 15, 2012 4:30pm - 5:10pm PDT
Emma AB

5:20pm PDT

Automatic Swift ring construction and maintenance
This session will include the following subject(s):

Automatic ring construction and maintenance:

The ring is the heart of swift, but correctly populating it and keeping it up to date is currently left to manual intervention.
This session is intended to discuss approaches to automating initial ring construction, as well as consideration in maintaining the ring.
Initial construction maps available disks to zones in the ring. Disk removal and addition trigger swift activity to try to achieve the desired replica count and weight balance across the cluster. This activity can lead to replication storms, impacting user experience. During the session we'll discuss best practices and strategies to minimize these impacts.
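One common mitigation can be sketched in a few lines: bring a new device's weight up in steps so each rebalance moves only a fraction of the partitions (numbers purely illustrative):

```python
def ramp_weights(target, steps):
    # Yield intermediate weights so each rebalance moves only a
    # fraction of the partitions rather than all at once.
    return [round(target * i / steps, 2) for i in range(1, steps + 1)]

plan = ramp_weights(100.0, 4)
```

An automated ring builder could apply such a schedule between rebalances instead of jumping straight to the final weight.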



Monday October 15, 2012 5:20pm - 6:00pm PDT
Annie AB

5:20pm PDT

Compute Cells for Grizzly
The session at Folsom was only 30 minutes and turned out to be too short. Given that there seems to be even more interest in Cells based on ML queries and the fact that Cells didn't make it into Folsom, it seems that it's worthwhile to do another talk.

I plan to explain the architecture, discuss what works, what doesn't work, and where we are with getting the code into master.


Monday October 15, 2012 5:20pm - 6:00pm PDT
Emma AB

5:20pm PDT

quantum-heat-integration
Extending Quantum functionality with Heat requirements

Design session introducing heat requirements for Quantum including a
further discussion of specific requirements and implementation planning.

Our plan, still incomplete but being mapped out, is here; we will be ready to roll by the summit:

https://github.com/heat-api/heat/wiki/Roadmap-Feature:-VPC-Quantum-mapping

Note: this session will likely not take the entire slot, so we may slot an additional item into this session "on-demand" (aka "sessions-as-a-service").



Monday October 15, 2012 5:20pm - 6:00pm PDT
Windsor BC
 
Tuesday, October 16
 

11:00am PDT

Bringing OpenStack QA into the Open
OpenStack development is done on github in a shared and open manner. In the QA realm, Tempest code is managed in the same way, but each provider/deployer seems to have their own set of acceptance tests behind closed doors. It is hard to see what advantage this situation offers anyone involved. It would be good to understand why this is happening and what needs to be done to bring more tests under the open umbrella.
The goal is to have a real acceptance test in a shared repository.


Tuesday October 16, 2012 11:00am - 11:40am PDT
Emma C

11:00am PDT

HPC for OpenStack
This session will include the following subject(s):

Scheduler for HPC with OpenStack:

There is a strong interest in HPC with OpenStack. As the current scheduler is not adequate to handle scheduling needs for HPC, we (USC-ISI) are working on a scheduler for HPC with OpenStack. We will show our progress so far and our prototype scheduler, including a proximity scheduler. We will get feedback, understand the OpenStack HPC community's needs, and discuss future directions.

HPC for OpenStack:

The OpenStack HPC community is growing with popular support for baremetal provisioning, heterogeneous scheduling, and upcoming accelerator support. We'll first briefly describe the current state of HPC in OpenStack. This will be followed by an open discussion about the features that the community is looking for. High performance networking, GPU support in Xen, scheduler and management extensions, and other community-requested features are all options.

We'll continue the discussions from previous Design Summits and hash out a plan for Grizzly and beyond.


Tuesday October 16, 2012 11:00am - 11:40am PDT
Emma AB

11:00am PDT

Oslo Status and Plans

This session will re-state the general idea of openstack-common, review the progress we made in Folsom and discuss general plans for the project in Grizzly.

We will discuss renaming the project to Oslo, renaming the current repo to oslo-incubator, promoting some APIs out of incubation, a versioning scheme for library releases, and a clear-out of stalled incubation efforts.


Tuesday October 16, 2012 11:00am - 11:40am PDT
Annie AB

11:00am PDT

quantum: improving IPv6 support / pluggable IP allocation
split session

This session will include the following subject(s):

Make IP allocation algorithm pluggable:

The default IP allocation algorithm works well on internal networks with its quick reuse and compact IP allocation strategy. On shared or public networks, this allocation scheme may not meet the provider's security needs (rapid re-use by another tenant). In Grizzly, IP allocation should be pluggable per network, so that providers can choose the strategy that best fits their needs after weighing the space/time cost of each approach.

Proposed allocation approaches include:
- first available IP
- random
- least recently used

Proposed recycling approaches include:
- expiration delays
- tenant affinity (i.e. on a shared network a tenant is likely to get back the address they just released)
- address space exhaustion before reclamation
- DNS-aware expiration (i.e. don't expire if a DNS record is still pointing at the IP)
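A toy sketch of making the allocation strategy pluggable per network (the strategy names follow the proposals above; everything else is illustrative):

```python
import random

def first_available(free_ips):
    # Lexicographic min is good enough for this toy pool.
    return min(free_ips)

def random_choice(free_ips):
    # Random reuse makes it harder to inherit another tenant's traffic.
    return random.choice(sorted(free_ips))

STRATEGIES = {"first": first_available, "random": random_choice}

def allocate(free_ips, strategy="first"):
    # Each network could carry its own strategy name in its config.
    ip = STRATEGIES[strategy](free_ips)
    free_ips.remove(ip)
    return ip

pool = {"10.0.0.2", "10.0.0.3", "10.0.0.4"}
ip = allocate(pool, "first")
```

The recycling policies listed above would slot in the same way, as pluggable hooks run when an address is released.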



Improving IPv6 support:

Quantum has rudimentary IPv6 support. In Grizzly we need to expand and polish the implementation.
- better DHCPv6 support
- fully working RA
- L3 Router support
- v6 firewall and/or security group support
- Automatic v6 subnet allocation/creation


Tuesday October 16, 2012 11:00am - 11:40am PDT
Windsor BC

11:00am PDT

Unconference

Content will be scheduled on site.


Tuesday October 16, 2012 11:00am - 12:30pm PDT
Maggie

11:50am PDT

Framework and APIs for advanced service insertion
The existing Quantum API covers basic L2 and L3 communication. There is a lot of demand for inserting higher-level services, for example firewalling, load-balancing, VPN, etc.

The discussion will cover generic mechanisms for extending the L2/L3 model to insert services. We will not discuss the design of individual service APIs (LB, FW, etc.) other than as examples of how one service insertion mechanism might be better suited than another.

We will also talk about a mechanism by which third parties can provide service APIs that are independent of whatever "core plugin" is running.


Tuesday October 16, 2012 11:50am - 12:30pm PDT
Windsor BC

11:50am PDT

Making Sense of Nova's Config Options
Nova has 527 configuration options in Folsom.

This session will discuss some practical plans to start bringing some order to these in Grizzly by scoping, grouping and classifying our existing options.

We should also have a brief discussion about our general attitude towards configuration options - e.g. when we should add a new option, how defaults should be chosen, and how to avoid continued option proliferation.
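A toy sketch of the grouping idea, scoping flat option names into sections (the option names are invented for illustration; this is not oslo/nova code):

```python
import configparser

flat = {  # invented examples of today's flat option names
    "libvirt_type": "kvm",
    "libvirt_uri": "qemu:///system",
    "scheduler_driver": "filter",
}

config = configparser.ConfigParser()
for key, value in flat.items():
    # Treat the first underscore-delimited token as the group/scope.
    group, _, opt = key.partition("_")
    if not config.has_section(group):
        config.add_section(group)
    config.set(group, opt, value)
```

Grouping 527 options into a few dozen scoped sections like this makes it far easier to see which subsystem each option actually belongs to.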


Tuesday October 16, 2012 11:50am - 12:30pm PDT
Emma AB

11:50am PDT

oops framework for production error collection
Lots of logging to log files is great and all, and things like rsyslog are too - I mean, we pretty much know how to collect and aggregate the actual log messages into a place where we can see them. BUT - what about the content that we put into the log messages in the first place?

oops - http://pypi.python.org/pypi/oops/0.0.13 - was developed with that problem in production environments in mind. It's a framework for serialization and deserialization of software problem reports, as well as various sets of tools for reporting and analysis.

Since at the moment the main way to track down problems is by trolling through logs and doing timing correlation, I'd like to chat about if and how we could make use of oops across the openstack projects. I think it would make tracking down problems in integration tests, as well as in actual production, much more straightforward.


Tuesday October 16, 2012 11:50am - 12:30pm PDT
Emma C

11:50am PDT

Unified CLI, take 2

During the Folsom summit we defined some requirements for a unified command line client program for OpenStack (http://wiki.openstack.org/UnifiedCLI). Since then we have begun development and made significant progress, but the project has stalled out a bit. During this session we will discuss restarting the project and find additional contributors.


Tuesday October 16, 2012 11:50am - 12:30pm PDT
Annie AB

1:50pm PDT

Adding optional security to RPC

As the standard RPC calls are used more and more to convey information that may need to be auditable, some messages may need to carry additional security. Namely, a message may need to be signed and to contain a sequence number incremented on each host.

If done correctly, this would allow us to trace a series of messages (for example, billable events) and ensure that the series has not been tampered with and that no messages are missing.

See also session http://summit.openstack.org/cfp/details/118 which is a precursor to that.

Based on a discussion we started with Eric Windisch, this effort overlaps with the work he started; we will join our efforts in this session and, hopefully, in future work.

https://blueprints.launchpad.net/nova/+spec/trusted-messaging
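The guarantee described above can be sketched with a shared-secret HMAC plus a per-host counter. This is a toy illustration, not the trusted-messaging blueprint's actual design: `SECRET`, `sign_message`, and `verify_message` are hypothetical names, and a real system would use per-host keys distributed by a key server.

```python
import hashlib
import hmac
import json

# Assumption: a pre-shared key distributed out of band.
SECRET = b"pre-shared-key"

_send_counters = {}  # per-host monotonically increasing sequence numbers


def sign_message(host, payload):
    """Wrap an RPC payload with a sequence number and an HMAC signature."""
    seq = _send_counters.get(host, 0) + 1
    _send_counters[host] = seq
    body = json.dumps({"host": host, "seq": seq, "payload": payload},
                      sort_keys=True)
    sig = hmac.new(SECRET, body.encode("utf-8"), hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}


def verify_message(msg, last_seq):
    """Check the signature and that no messages were dropped or replayed."""
    expected = hmac.new(SECRET, msg["body"].encode("utf-8"),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, msg["sig"]):
        raise ValueError("message has been tampered with")
    seq = json.loads(msg["body"])["seq"]
    if seq != last_seq + 1:
        raise ValueError("message missing or replayed")
    return seq
```

A tampered body fails the HMAC check, and a gap in sequence numbers reveals a missing (or replayed) message - the two properties the session description asks for.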



Tuesday October 16, 2012 1:50pm - 2:30pm PDT
Annie AB

1:50pm PDT

Compute Drivers and Hypervisors
This session will include the following subject(s):

Splitting out compute to its own service:

There is a lot going on in nova. I feel that nova should be the scheduler and orchestration point for compute services, and I would like to have a discussion about abstracting the "hypervisor" and "container" services into a defined API, making it easier to maintain features, parity, and enhancements there without touching the core nova code base.

Lock the hypervisor guys in a room:

A general discussion between people who work on the different hypervisors, to ensure that:

- code is shared
- code is outside the hypervisor when at all possible
- different hypervisors have the same behaviours
- great ideas propagate

Specific points I'd like to see discussed are injection (it looks like it shouldn't be hypervisor specific but it is; it's hard to add new file injection methods for unsupported filesystems, different config disks or even different network interface file formats), firewalling, proliferating VIF implementations and device mapping.


Tuesday October 16, 2012 1:50pm - 2:30pm PDT
Emma AB

1:50pm PDT

Quantum L3 and Services API (proposal)
What is it:
- proposal for integrated connectivity (L2 and L3) and services (security, load balancing, internet access, NAT, QoS, ...) APIs for Quantum.

Goal:
- abstract APIs that decouple the desired connectivity/services specification from implementation, enabling coherent specification of connectivity and services APIs with a repeatable pattern for future extensions. This would allow Quantum to manage an underlying network comprised of best-of-breed virtual and physical systems.
- compatibility with existing API v2.0



Tuesday October 16, 2012 1:50pm - 2:30pm PDT
Windsor BC

1:50pm PDT

Use of testtools and testrepository
Currently we make heavy use of nose to do our test running, both in tempest and in the projects, but nose is invasive, and when it has issues itself it will toss away results about the work it has done. There are a bunch of different ways to deal with that, but the one I'd like to propose and chat about is refactoring our unit tests to be 100% compatible with the standard Python unittest framework, augmented by the testtools library (which already does many of the things we've built into custom base classes and helper functions across OpenStack). At that point we can start using other test runners, like testrepository, which allow us to do things more easily, such as only running the tests that failed the last time we ran the suite (useful for a dev cycle focused on fixing a bug). We can also begin to implement parallel testing in tempest.

Simple docs on testrepository parallel runs can be found here:

http://bazaar.launchpad.net/~testrepository/testrepository/trunk/view/head:/doc/MANUAL.txt

It's focused on parallel single-machine at the moment, but the hard guts are there, so spreading the load onto multiple machines should not be hard.
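A minimal example of the style being proposed: a test case written against the plain unittest contract, with testtools.TestCase used when available (it is a compatible superset of unittest.TestCase). The class and test names here are illustrative, not from any real project.

```python
import unittest

try:
    from testtools import TestCase  # drop-in superset of unittest.TestCase
except ImportError:
    from unittest import TestCase   # stdlib fallback


class TestFlavorLookup(TestCase):
    def setUp(self):
        # testtools requires calling the parent setUp, same as unittest.
        super(TestFlavorLookup, self).setUp()
        self.flavors = {"m1.tiny": 512, "m1.small": 2048}

    def test_tiny_memory(self):
        self.assertEqual(512, self.flavors["m1.tiny"])
```

Because it depends only on the standard unittest API, the same class runs unchanged under nose, under testrepository, or under `python -m unittest` - which is exactly what makes swapping runners possible.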


Tuesday October 16, 2012 1:50pm - 2:30pm PDT
Emma C

1:50pm PDT

Unconference

Content will be scheduled on site.


Tuesday October 16, 2012 1:50pm - 6:00pm PDT
Maggie

2:40pm PDT

Metering Network Resources: Ceilometer Integration
This will be a working session for discussing the changes to ceilometer and quantum necessary to integrate them to enable metering of network resources. The goal is to develop a general design and task break-down for the work to be done during the Grizzly release timeframe.


Tuesday October 16, 2012 2:40pm - 3:20pm PDT
Windsor BC

2:40pm PDT

Services Framework for Command and Control

The services framework from nova is in the process of being moved into openstack-common. I'd like to propose the following additions to the services framework to make it more useful:
* Signal Handling
--HUP to reload configs
--TERM die as soon as threads are finished
* Command and Control over RPC:
Pause (stop consuming regular queue messages)
Resume (continue consuming regular queue messages)
Reload (reload configs and continue)
Restart (run a new copy of the daemon and terminate this one)
* Potential Command and Control:
ChangeConfigOption (does it change the on disk config or just the running one?)
StartPaused (start a new copy of the code Paused, requires running simultaneous code)
SoftTerminate (terminate the instance when threads are finished)
NOTE: The last two could be useful for the following strategy:
a) StartPaused
b) Pause (this one)
c) Resume (new one)
d) SoftTerminate(this one)
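The signal-handling part of the proposal can be sketched as below. This is an illustrative single-process sketch (the class and attribute names are hypothetical, and a real service would re-read its config files and coordinate worker threads), showing SIGHUP as config reload and SIGTERM as soft shutdown.

```python
import signal


class Service(object):
    """Toy service illustrating the proposed HUP/TERM semantics."""

    def __init__(self):
        self.running = True
        self.config_reloads = 0
        signal.signal(signal.SIGHUP, self._handle_hup)
        signal.signal(signal.SIGTERM, self._handle_term)

    def _handle_hup(self, signum, frame):
        # HUP: re-read configuration files here, without restarting.
        self.config_reloads += 1

    def _handle_term(self, signum, frame):
        # TERM: stop accepting new work; the main loop exits once
        # worker threads are finished.
        self.running = False
```

The RPC-driven commands (Pause/Resume/Reload/Restart) would sit alongside this, invoking the same reload and soft-shutdown paths from message handlers instead of signal handlers.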


Tuesday October 16, 2012 2:40pm - 3:20pm PDT
Annie AB

2:40pm PDT

SmokeStack as a multi-distro test system
Since Cactus, SmokeStack has proven to be a useful system for both upstream and downstream testing. The system makes use of real packages and uses config management to set up OpenStack and run test suites like Torpedo, Tempest, and the Nova smoke tests.

Over the past two years the system has used both Fedora and Ubuntu as a basis for its testing, but there is nothing prohibiting other distros from collaborating as well. Let's discuss how we can make the system better in a multi-distro sense, so it can catch issues both upstream and downstream and help make OpenStack better.

Topics to be discussed:

-how to help plug new distros into SmokeStack
-patterns for keeping packages up to date
-using this as a tool to help upstream testing


Tuesday October 16, 2012 2:40pm - 3:20pm PDT
Emma C

3:40pm PDT

Defining an upgrade path for Quantum
With Quantum now part of OpenStack core, it is of paramount importance that users are provided with a simple and reliable upgrade path, beyond package upgrades.

This should include at least database synchronization, but possibly also component versioning (for instance, if you try to run a v1 plugin against the v2 API this should be properly checked).

The aim of this session is to:
1) discuss which are the key upgrade areas
2) discuss alternatives (and possibly approve a proposal) for handling db upgrades and component versioning (if deemed necessary)



Tuesday October 16, 2012 3:40pm - 4:20pm PDT
Windsor BC

3:40pm PDT

OpenStack configuration Testing
At present, automated testing only tests a single configuration. We are facing an explosion of configurations that we expect will actually be used:

* Hypervisor
* Quantum vs. nova-network
* Cinder vs. nova-volume
* mysql vs. postgres

In addition, some of these components have important sub-configurations. We need to be testing more configurations but can't possibly test them all. Some of these need real hardware configurations to be properly tested.

We need to choose a set of configurations to include and marshal the hardware resources. At the same time, anyone who wants to add another configuration to the tests should be able to do so by providing the hardware resources.

Pruning the configuration tree requires the input of those with deep architectural knowledge of the component interactions. Only they can say which configurations are "different" enough that they should be included.


Tuesday October 16, 2012 3:40pm - 4:20pm PDT
Emma C

3:40pm PDT

State Management
This session will include the following subject(s):

Coordinate all the State!:

One of the design tenets of OpenStack is "Accept eventual consistency and use it where it is appropriate". This is clearly necessary in a distributed system such as OpenStack. However, "accepting it" does not make it work. OpenStack (nova in particular) relies mainly on the database for coordination (and it does not use features like foreign keys that might help with that). Using a message queue is also a good idea, but using an MQ plus a DB, and thus having two state "channels", does not make the problem easier (example: the nova-compute service).

Moving services out into separate projects is generally a good idea, but it only exacerbates this problem.

I think it is time to talk about state coordination and some guidelines developers could use to increase consistency of internal state of the system. Just adding "#TODO: Sometimes, strange things happen here. Might be a concurrency bug" does not solve the problem.

This problem is hard. But we should start to solve it.

Recovery of instances from inconsistent state:

If an OpenStack service goes down (or is already down) while processing a request, the corresponding instance can remain in an inconsistent state (some 'ing' state) in various scenarios. There is a limited set of operations possible on such instances, mostly leaving the instance in an unusable state. Such instances also continue consuming resources unproductively.
Therefore, they need to be identified, put into a stable state, and their associated resources released if no longer required.


Tuesday October 16, 2012 3:40pm - 4:20pm PDT
Emma AB

3:40pm PDT

Using the message bus for messages

The RPC library in openstack-common includes an API for sending and receiving RPC calls and for sending notification messages. It does not include usable APIs for subscribing to notifications, or for sending and receiving generic messages to be consumed by multiple workers. The ceilometer and quantum projects are hijacking notifications and using a low-level API to achieve this goal, but we need to add a public API and ensure that all of the RPC drivers support the pattern. The goal of this session is to agree on the basic design for such an API to be added during Grizzly.

Along with a way to receive notifications, we should also consider adding a more generic message API that is not tied to RPC. For example, as adoption of the ceilometer project grows, we may want to include a library for creating and sending metering messages from other services. We have such a library, using RPC right now, and it is mostly decoupled from the rest of ceilometer. At DreamHost we are building a service that will use the library, which will help us discover any other issues with removing it from ceilometer. It really shouldn't need to be based on RPC, though, since the messages are one way and meant to be consumed but need no reply.
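The pattern being asked for can be modeled in a few lines. This is an illustrative in-memory sketch, not the actual openstack-common RPC API (the `NotificationBus` name and methods are assumptions): one-way messages fanned out to every subscriber on a topic, with no reply expected.

```python
class NotificationBus(object):
    """Toy model of topic-based, fire-and-forget notification delivery."""

    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, callback):
        # Multiple workers may register for the same topic.
        self._subscribers.setdefault(topic, []).append(callback)

    def notify(self, topic, message):
        # Fire and forget: every consumer gets the message, none replies.
        for callback in self._subscribers.get(topic, []):
            callback(message)
```

A real implementation would put a broker (AMQP, ZeroMQ, ...) between `notify` and the callbacks, which is exactly why the pattern should not be forced through the request/reply RPC machinery.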


Tuesday October 16, 2012 3:40pm - 4:20pm PDT
Annie AB

4:30pm PDT

Choosing a WSGI framework for API services

The goal of this session is to discuss whether the existing WSGI framework in openstack-common should be retained and used for future API work, or whether it makes more sense to look at some of the other Python web API frameworks and adopt something being used and maintained by the broader community. I will put together a few notes about why I chose not to use the openstack-common framework for the ceilometer API, and some pros and cons of other frameworks that we should evaluate.


Tuesday October 16, 2012 4:30pm - 5:10pm PDT
Annie AB

4:30pm PDT

Gating with integration testing
Tempest is now gating commits into OpenStack projects, but it's only running the smoke tests because Tempest takes too long to run a full set of tests (and this will only get worse with time). This has allowed bugs to get into projects for which we had tests.

There are multiple approaches that might work here: parallelizing the nose tests (nose bugs currently prevent it), manually breaking up the tempest tests into parallel chunks, or looking at a different (non-nose) testing framework that has parallelism as a feature. This design summit session will be focused on figuring out the path forward to get all of the integration testing in tempest able to run as part of the gating process during the Grizzly cycle.


Tuesday October 16, 2012 4:30pm - 5:10pm PDT
Emma C

4:30pm PDT

LBaaS 1 - use cases and requirements
In addition to the tenant- and provider-facing APIs, the additional functional requirements should be discussed, with the following list as an example:
Plugin Model
- Service capabilities versus plugin capabilities versus middleware
capabilities
- Multiple simultaneous plugins (to support multiple LB models)


Quantum Integration/dependency
- should the service be deployable standalone (without Quantum)?

Additional dependencies/integration
- keystone
- horizon



Tuesday October 16, 2012 4:30pm - 5:10pm PDT
Windsor BC

4:30pm PDT

New features on bare-metal provisioning framework
The overview and current status of the bare-metal provisioning framework will be presented
in the [Related Open Source Projects] speaker track on Monday.

In this summit session, we will discuss new features that will be added to the bare-metal framework.

We (USC/ISI and NTT docomo) plan to add fault-tolerance features
for both bare-metal nova-compute and bare-metal database nodes,
since a failure of bare-metal nova-compute or the database affects the whole bare-metal machine farm.
We will also talk about modifying nova-scheduler to support bare-metal provisioning.

NEC has a plan to talk about security enhancement with Quantum/OpenFlow and network/cinder isolation.

Calxeda has a plan to talk about deployment support and bare-metal testing on Calxeda systems.

HP has a plan to talk about CI process for testing bare-metal.

http://etherpad.openstack.org/GrizzlyBareMetalCloud


Tuesday October 16, 2012 4:30pm - 5:10pm PDT
Emma AB

5:20pm PDT

Improving Boot-from-Volume
Booting an instance using volumes is wonky and inefficient. There are several things we can do to make it awesome:

- Minimize data transfer when booting from volumes
- no HTTP image download, use fast cloning, etc
- Allow Nova to boot instances on volumes by default
- Clean up UX


Tuesday October 16, 2012 5:20pm - 6:00pm PDT
Emma AB

5:20pm PDT

LBaaS 2 - Tenant APIs
Tenant API
- review of comparison of various implementations
- Review a proposal for common resource model for tenant api

- discuss the Implementation as either a single endpoint with extensions
or a different endpoint.
For reference, nova is using both : nova-api expose different endpoints
(os-api, metadata, ...)
and extensions for admin actions (e.g. http://goo.gl/udpkK)
- discuss proposed resource model for provider API

Youcef leading this one?


Tuesday October 16, 2012 5:20pm - 6:00pm PDT
Windsor BC

5:20pm PDT

Performance and Scalability testing
This is a brainstorming session on enabling a performance and scalability testing process for OpenStack. It will summarize current activities in this realm, if any, talk about possible unification and steps going forward, and point to frameworks and tools that will come in handy.


Tuesday October 16, 2012 5:20pm - 6:00pm PDT
Emma C

5:20pm PDT

XML Request/Response Processing

XML request and response processing in OpenStack has been a hot topic recently. The OpenStack code and development community has a large bias towards JSON, and there have been proposals to deprecate support for XML. As a result, XML request/response processing is often unavailable or incorrect. However, XML is highly desired for accelerating adoption of OpenStack within the enterprise.

This session will be an open-ended discussion of various proposals for improving XML request/response processing in OpenStack. One possibility is that a framework could be leveraged or developed to make it much easier for OpenStack developers to support XML in all of the OpenStack services.

This session will explore potential solutions for XML processing, including determining whether a reasonable framework can be developed and added to all the OpenStack services. Additionally, we would like to discuss how an overall process can be defined to resolve the current issues with XML as well as address future requirements in this area.



Tuesday October 16, 2012 5:20pm - 6:00pm PDT
Annie AB
 
Wednesday, October 17
 

11:00am PDT

entrypoints based plugins

I started a set of patches towards the end of the Folsom cycle to allow the use of entry points for plugin and extension loading in nova. This was rightly rejected as a new thing that hadn't really been discussed. However, in general I still like the idea, given that Python does have this nice mechanism for plugin loading.

Why don't we walk through what it looks like, whether we should do it and how.
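For reference, the mechanism in question looks roughly like this. The `"nova.compute.drivers"` group name is hypothetical; real plugins would declare whatever group nova settles on under `entry_points` in their setup.py.

```python
import pkg_resources


def load_plugins(group):
    """Map entry point names to their loaded plugin objects.

    Every installed distribution that declares an entry point in
    `group` contributes a plugin, with no registry file in nova itself.
    """
    plugins = {}
    for ep in pkg_resources.iter_entry_points(group):
        plugins[ep.name] = ep.load()  # imports the module, returns the attr
    return plugins
```

With a package declaring `nova.compute.drivers` in its setup.py, `load_plugins("nova.compute.drivers")` would return something like `{"libvirt": LibvirtDriver}`; with nothing installed for the group, it simply returns an empty dict.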


Wednesday October 17, 2012 11:00am - 11:40am PDT
Annie AB

11:00am PDT

Moving VPN support to Quantum
The Cloudpipe VPN service currently works as a service VM controlled by Nova. In Grizzly, we should examine moving the VPN service to Quantum. The benefits include:
- direct access to network infrastructure
- VPN process running in network namespace is less resource intensive
- easier to integrate VPN into other security features added to Quantum





Wednesday October 17, 2012 11:00am - 11:40am PDT
Windsor BC

11:00am PDT

Upgrade testing with Grenade
Grenade is a new framework for testing OpenStack upgrades using DevStack. Topics to discuss include:
* documenting upgrade steps
* automating the Grenade testing
* gating rules, if any (i.e., should it be run on every commit?)


Wednesday October 17, 2012 11:00am - 11:40am PDT
Emma C

11:00am PDT

XenAPI driver Roadmap
Get everyone interested in working on the XenAPI driver in the same room to discuss Grizzly and beyond.

It would be good to discuss the plans around the blueprints Citrix is looking at implementing for the XenAPI driver during Grizzly:
- Config Drive
- GPU Pass-through
- Live Migration enhancements
- Storage enhancements (if not covered in a Cinder session)

We should decide what to do about images with separate kernel images. We either need to make them first class citizens in XenAPI, or look to deprecate their use in XenAPI.

It would be good to see what other people hope to work on in the XenAPI driver in Grizzly and co-ordinate our efforts.

It would also be good to brainstorm ideas around other improvements we think would be useful in the next few releases.

We can also discuss any issues related to XenServer and XCP that we can communicate to the Citrix XenServer Product Team.


Wednesday October 17, 2012 11:00am - 11:40am PDT
Emma AB

11:50am PDT

CirrOS, cloud-init: the future of cloud guests
This session will include the following subject(s):

Dive into cloudinit:

It'd be great to go over cloud-init with everyone: provide examples of what it can do and how it does it (technically), how it can be used with OpenStack (config drive, EC2 metadata, ...), and what some of the new features were for Folsom. Ideas about what people want in the future would also be greatly appreciated.

CirrOS and the future of cloud guests:

CirrOS [http://cirros-cloud.net] is a small linux distribution designed primarily for quick validation of the ability to boot instances.

Some of its features:
* support for executing user-data
* support for config-drive-v2 and ec2 metadata source
* very small (less than 15M download)
* boots very quickly
* boots in LXC or kvm
* acpid (poweroff via acpi event)

I'd like to make people aware of this, and see what other things they'd like it to do.



Wednesday October 17, 2012 11:50am - 12:30pm PDT
Emma C

11:50am PDT

Hyper-V Nova Compute features in Grizzly
Hyper-V is a great free hypervisor developed by Microsoft. A new Hyper-V Nova compute driver was recently merged into the Nova code base in time for the Folsom release, thanks to the combined effort of Microsoft, Cloudbase Solutions and the great developers in our community.
In this session we'll demo how to setup a Hyper-V 2012 based Folsom infrastructure with Linux, Windows and FreeBSD instances, showing also great features like Live Migration and Replica. We will also talk about all the upcoming Grizzly features currently under development!


Wednesday October 17, 2012 11:50am - 12:30pm PDT
Emma AB

11:50am PDT

LBaaS 3 - Provider APIs
Serge is leading this session



Wednesday October 17, 2012 11:50am - 12:30pm PDT
Windsor BC

11:50am PDT

Unified rootwrap & password management

This session will primarily focus on merging rootwrap into openstack-common and further improvements to it. Time at the end of the session will be assigned to discuss incorporating keyring usage into the service infrastructure and any further service security infrastructure ideas.

This session will include the following subject(s):

Towards a unified and more featureful rootwrap:

Multiple projects (Nova, Cinder, Quantum) have adopted nova-rootwrap, so moving it to openstack-common sounds like a good idea to avoid code duplication and painful sync.

In this session we will discuss the plan to push rootwrap into openstack-common, as well as additional features for rootwrap (path searching, logging, Python code execution).

All your passwords belong to keyrings?:

Clients are starting to use python-keyring for passwords. It would seem to make sense to have other sensitive passwords use a similar mechanism (for example in nova.conf, keystone.conf, or the paste api ini files, and so on). These places shouldn't have clear-text passwords, even though they do right now (eck). I'd like to get input on what people think about that and any issues they see.


Wednesday October 17, 2012 11:50am - 12:30pm PDT
Annie AB

1:50pm PDT

A common database

Database code now exists in several projects, and much of it needs improvement. While improving the database abstraction itself is a good thing, it is difficult to do across various projects without it being moved into a common place. Additionally, there is code being sought for inclusion into openstack-common (oslo) which requires database access. For these reasons, a blueprint has been registered to move the database abstraction into common.

I will discuss my intentions, the new database library architecture, and how this will affect other blueprints such as db-threadpool and no-db-compute.


Wednesday October 17, 2012 1:50pm - 2:30pm PDT
Annie AB

1:50pm PDT

Adding OpenVZ support to Nova
Some folks at different organizations are working on adding OpenVZ support to Nova. Let's get together and talk about this.


Wednesday October 17, 2012 1:50pm - 2:30pm PDT
Emma AB

1:50pm PDT

Future of DNS in Openstack
Today, DNS is implemented in Nova. However, with the consolidation of network services in Quantum, it makes sense to re-evaluate the DNS implementation so that DNS configuration and management is more tightly integrated with DHCP and IP address management.
In addition to that consolidation, this would give us an opportunity to unify multiple efforts and enable additional features in OpenStack to support a more general-purpose DNS provisioning solution.


Wednesday October 17, 2012 1:50pm - 2:30pm PDT
Windsor BC

1:50pm PDT

Multi-backend support for Cinder

Most drivers in Cinder don't deal with local storage; rather, they have a backend storage system and an associated API. As an operator, managing multiple backends with Cinder becomes a hassle, as each backend requires its own instance (or more, for HA) of the volume manager.

This session aims to answer the following and more:
- Who would use it?
- What are the benefits?
- What are the downsides?
- Do the drivers need to change to accommodate this?
- How do we schedule, how to choose the right backend?
- Does scheduling support for this fall under volume_types or a new volume_backend?
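One possible answer to the scheduling question can be sketched as matching a volume_type's extra spec against the capabilities each backend reports. Everything here is hypothetical (the `volume_backend_name` key and `pick_backend` function are illustrative, not Cinder's API), but it shows how volume_types could carry backend selection without a new concept.

```python
def pick_backend(backends, volume_type):
    """Return the name of a backend satisfying the requested volume_type.

    `backends` maps a backend name to the capability dict it reports;
    `volume_type` optionally names a backend via its extra specs.
    """
    wanted = volume_type.get("extra_specs", {}).get("volume_backend_name")
    for name, capabilities in backends.items():
        # No constraint means any backend will do; otherwise require a match.
        if wanted is None or capabilities.get("volume_backend_name") == wanted:
            return name
    raise ValueError("no backend provides %r" % wanted)
```

A real scheduler would also weigh free capacity and other reported metrics, but the shape of the decision - extra specs filtered against reported capabilities - stays the same.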


Wednesday October 17, 2012 1:50pm - 2:30pm PDT
Emma C

1:50pm PDT

Unconference

Content will be scheduled on site.


Wednesday October 17, 2012 1:50pm - 6:00pm PDT
Maggie

2:40pm PDT

Instrumentation Monitoring

For performance, metrics and scaling tasks there is a strong need to have various components/code instrumented (call times, error counts, round-trip times, ...). Currently ceilometer has similar data but different fundamental requirements, so this session should talk about whether to augment ceilometer with this data or create a new tool. New code 'decorators' (which are not relevant to ceilometer) also need to be added to gather this information.

Many stackers are depending on OpenStack for large scale and revenue generating operations. In these environments, it is critical to monitor the health and optimize the performance of OpenStack components.


Wednesday October 17, 2012 2:40pm - 3:20pm PDT
Annie AB

2:40pm PDT

Performance Evaluation
OpenStack performance evaluation in a large-scale cloud environment, providing insight into various configuration options, based on performance metrics such as provisioning latency, runtime performance and reliability.


Wednesday October 17, 2012 2:40pm - 3:20pm PDT
Emma AB

2:40pm PDT

Quantum orchestration / ARM support for Quantum
split session

This session will include the following subject(s):

ARM Support for Quantum:

In the last couple of releases we made great progress with Nova, Glance, and Keystone running on ARM. We have to keep this progress up, but now we have to focus on Quantum as well.

Newtonian - Network Orchestration Service:

We have been running into limitations using Quantum at scale. In order to solve our use case we are building Newtonian. Newtonian will have a nova oriented API, a public accessible quantum API, an authoritative database, and have a plugin layer on the backend supporting quantum plugins. This has started as a Rackspace specific use case, but we want to make it public so others can play with it and contribute as well.


Wednesday October 17, 2012 2:40pm - 3:20pm PDT
Windsor BC

2:40pm PDT

Volume types, extra specs, QoS

Expand on the discussion held on IRC and on the etherpad about use cases for volume_types and how the drivers/scheduler should handle them. Talk more about the metrics the drivers need to expose to the scheduler.

http://etherpad.openstack.org/cinder-usecases


Wednesday October 17, 2012 2:40pm - 3:20pm PDT
Emma C

3:40pm PDT

Cinder New Features for Grizzly

This session will include the following subject(s):

New features:

New features that are too small for a slot of their own:

Secure attach - modifying the attach path to go via a cinder control node before reaching the compute node, so that a complete compromise of a compute node does not expose any more volumes than are already attached to that node.

Retain glance metadata for bootable volumes
https://blueprints.launchpad.net/cinder/+spec/retain-glance-metadata-for-billing

List bootable volumes - a proposal for the concept of a bootable volume - purely a volume created from Glance, for ease of UI design

Volume backup - an API to copy a volume to object store

IOPs metering / billing
https://blueprints.launchpad.net/cinder/+spec/volume-usage-metering

Volume resize

Volume status state machine (ClayG)


Wednesday October 17, 2012 3:40pm - 4:20pm PDT
Emma C

3:40pm PDT

Improving Quantum Firewalling
split session

This session will include the following subject(s):

Packet Filter API and its drivers:

A packet filtering API for Quantum networks will be proposed.
This API provides fine-grained packet filtering, where
each filter entry consists of matching fields, an action and a priority.
The matching fields consist of in/out quantum port-id, src/dst MAC address, IP address, L4 port number
and so on; they are similar to iptables and OpenFlow matching fields.

While security groups exist in OpenStack as a packet filtering feature,
this API provides more fine-grained packet filtering, such as outgoing packet filtering from VMs
and filtering of inter-VM communication. In addition, some use cases require controlling packet filtering
rules based on operational status: for example, bare-metal computing support requires that some communication be allowed only while an instance is booting.

We believe such a primitive (low-level) API is useful for these use cases.
The security group feature can be implemented on top of this API.

This API can be implemented by various methods (iptables, firewall appliances, OpenFlow-based filtering
and so on), and a plugin (or driver) architecture would be suitable.
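The proposed filter-entry structure can be illustrated concretely. The field names below are assumptions in the spirit of iptables/OpenFlow match fields, not the actual API proposal: each entry carries match fields, an action, and a priority, with the highest-priority matching entry winning.

```python
def make_filter_entry(in_port=None, src_mac=None, dst_ip=None,
                      l4_port=None, action="drop", priority=100):
    """Build one filter entry; None match fields are wildcards."""
    return {
        "match": {"in_port": in_port, "src_mac": src_mac,
                  "dst_ip": dst_ip, "l4_port": l4_port},
        "action": action,      # "allow" or "drop"
        "priority": priority,  # higher priority wins on overlap
    }


def apply_filters(entries, packet):
    """Return the action of the highest-priority entry matching the packet."""
    for entry in sorted(entries, key=lambda e: -e["priority"]):
        match = entry["match"]
        if all(v is None or packet.get(k) == v for k, v in match.items()):
            return entry["action"]
    return "allow"  # default policy when no entry matches
```

A security-group implementation layered on top would simply generate such entries (e.g. a low-priority match-all drop plus higher-priority allow rules), which is the "built on top of this API" claim above.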

Firewall API for quantum:

API for stateful firewall at the gateway.


Wednesday October 17, 2012 3:40pm - 4:20pm PDT
Windsor BC

3:40pm PDT

no-db-compute
The "no-db-compute" project is an effort to remove direct database access from compute nodes. We made some good progress in this direction in Folsom and we need to make more in Grizzly.

This session will:

- quickly recap why we want to do this in the first place
- recap what was done in Folsom
- discuss known obstacles to overcome
- discuss next steps to take in Grizzly development
- recruit developers to help with the work left to do


Wednesday October 17, 2012 3:40pm - 4:20pm PDT
Emma AB

3:40pm PDT

Standardizing client & API capabilities

OpenStack's clients and APIs are the first point of contact for users of OpenStack clouds, and the foundation upon which tools can be built. As such, it's time we start focusing on making that a first-class consistent experience.

Simple things like providing consistently named and implemented methods, and consistent sets of features (wherever implementation allows). If I've used one core OpenStack API I should be able to expect the same from another.

Examples of these types of features include:

* Filtering a list of resources on any attribute of the resource. (e.g. GET /servers?security_group=foo)

* Ordering (timestamp, alphabetical, etc.)

* Bulk actions (GET in bulk, DELETE in bulk, etc.)

* Many more...

The goal of this session is to determine a common set of features the community/user-base needs, and make that a contract which every API and client should strive towards.

It is *not* meant to rewrite any existing APIs, or make any backwards-incompatible changes. This is only to agree on our collective future and perhaps look at the easiest initial targets.


Wednesday October 17, 2012 3:40pm - 4:20pm PDT
Annie AB

4:30pm PDT

Cinder API 2.0 and beyond

The current volume API needs some TLC. Bringing a volume service to production has helped us identify areas where the current volume API is lacking. I would like to propose that we take a look at what can be done to whip the current volume API into shape for a v2.0 of the API, and begin discussing where the API should go down the road.


Wednesday October 17, 2012 4:30pm - 5:10pm PDT
Emma C

4:30pm PDT

LBaaS 4 - implementation planning
This session will include the following subject(s):

LBaaS 4 - Implementation planning :

talk about actual design of code modules, who will be working on what parts of the code, etc.


Wednesday October 17, 2012 4:30pm - 5:10pm PDT
Windsor BC

4:30pm PDT

State of CI systems and work for next cycle

A short session, hopefully early in the week, where we can talk about what we accomplished this past cycle in the CI and dev infrastructure world, as well as what we're already planning to do next. Hopefully, if we tell everyone what's already on the table before we start planning other things, it can stay in the back of everyone's heads as we plan during the rest of the week.


Wednesday October 17, 2012 4:30pm - 5:10pm PDT
Annie AB

4:30pm PDT

The road to live upgrade of an OpenStack cloud
There is a stated community goal of in-place live upgrades of an OpenStack cloud from N to N+1. To help us get there, we should have a summit session in which we try to capture all the current inhibitors to making that work. This will take longer than Grizzly, but having focused time to expose all the issues would be a great thing to do while at the summit.


Wednesday October 17, 2012 4:30pm - 5:10pm PDT
Emma AB

5:20pm PDT

Error handling and recovery
Nova has traditionally done poorly in the face of errors and recovering from them. All errors are either treated as fatal, or logged in a way that isn't visible to the user.

This is complicated by the fact that the API only allows asynchronous errors to be returned when the instance is in the ERROR state.

Changes are necessary to the API and nova implementation to allow non-fatal errors to be communicated to users.


Wednesday October 17, 2012 5:20pm - 6:00pm PDT
Emma AB

5:20pm PDT

Lunr: What, How, and Why

Rackspace Cloud Block Storage is launching soon. I'd like to briefly describe what the team built and what parts Lunr and OpenStack play in the product, then dive into some lessons learned in designing and developing a scalable, commodity-hardware-based storage solution and interfacing with nova-volumes, Cinder, and the Xen storage manager. After that, if anyone is still awake, we'll probably end the talk with a quick wrap-up highlighting some of the areas where we think OpenStack volumes has the biggest opportunities to improve and, with the community's blessing, how we can contribute!


Wednesday October 17, 2012 5:20pm - 6:00pm PDT
Emma C

5:20pm PDT

quantum team leadership & summit wrap-up
half-session

A non-technical community session to discuss plans for scaling leadership within the Quantum team. The Quantum project is growing quickly and is taking on a wider breadth of topics (particularly as we move to higher-level services).

This session will propose a strategy where we have different "sub-systems" of quantum, each with their own lead, similar to nova.


Wednesday October 17, 2012 5:20pm - 6:00pm PDT
Windsor BC

5:20pm PDT

Vulnerability Management Team

The OpenStack Vulnerability Management Team is responsible for the process of handling security vulnerabilities that have been reported to the project. The purpose of this session is to discuss any potential improvements that could be made to our handling of security issues.


Team info: http://www.openstack.org/projects/openstack-security/

Process: http://wiki.openstack.org/VulnerabilityManagement


Wednesday October 17, 2012 5:20pm - 6:00pm PDT
Annie AB
 
Thursday, October 18
 

9:00am PDT

Drive more automation from commit messages

Currently we trigger a number of changes to Launchpad based on content of commit messages submitted to Gerrit: bug status changes in particular.

This has proven very useful, but we can do better, in particular to drive blueprint status from commit messages. This may involve stricter rules (header-style data in commit messages?) so we can specify precise things like Related-Bug, Fixes-Bug, Finalizes-Blueprint, Partially-Implements-Blueprint, etc. This session will discuss if/how/what we should implement.
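As a concrete illustration, here is a minimal Python sketch of how a hook might extract such header-style trailers from a commit message. It is purely hypothetical: the trailer names are the ones floated above, not a settled convention, and the regex shape is an assumption.

```python
import re

# Trailer keys proposed in this session (illustrative, not final).
TRAILER_RE = re.compile(
    r'^(Related-Bug|Fixes-Bug|Finalizes-Blueprint|'
    r'Partially-Implements-Blueprint):\s*(\S+)\s*$',
    re.MULTILINE)

def parse_trailers(commit_message):
    """Return (key, value) pairs for recognized trailers, in order."""
    return TRAILER_RE.findall(commit_message)

message = """Fix floating IP quota check

Tighten the quota check so concurrent allocations cannot
exceed the per-tenant limit.

Fixes-Bug: #1012345
Partially-Implements-Blueprint: quota-refactor
"""

print(parse_trailers(message))
```

A real Gerrit hook would then map each pair to a Launchpad status update (close the bug, mark the blueprint implemented, and so on).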

Time permitting, we'll look into other Gerrit improvements, like a LP-prioritized review list (think http://reviewday.ohthree.com/)


Thursday October 18, 2012 9:00am - 9:40am PDT
Annie AB

9:00am PDT

Local Storage Volume plugin for Cinder

The goal of this blueprint is to implement a driver for Cinder that creates volumes in local storage and backs up point-in-time snapshots of your data to Swift for durable recovery. These snapshots are incremental backups, meaning that only the blocks on the volume that have changed since your last snapshot are saved.
Even though the snapshots are saved incrementally, when you delete a snapshot, only the data not needed by any other snapshot is removed. So regardless of which prior snapshots have been deleted, every active snapshot contains all the information needed to restore the volume. In addition, the time to restore the volume is the same for all snapshots, offering the restore time of full backups with the space savings of incremental ones.
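The snapshot semantics described above can be modeled with a toy content-addressed store. This is a sketch, not the proposed implementation: an in-memory dict stands in for Swift, and all names are illustrative.

```python
import hashlib

class SnapshotStore:
    """Toy model: each snapshot records the full list of block hashes,
    but a block's data is stored (uploaded) only once, so snapshots
    are incremental in space while full for restore purposes."""

    def __init__(self, block_size=4):
        self.block_size = block_size
        self.blocks = {}      # hash -> block data (stands in for Swift)
        self.snapshots = {}   # name -> ordered list of block hashes

    def snapshot(self, name, volume):
        hashes = []
        for i in range(0, len(volume), self.block_size):
            block = volume[i:i + self.block_size]
            h = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(h, block)  # upload only if new
            hashes.append(h)
        self.snapshots[name] = hashes

    def delete(self, name):
        del self.snapshots[name]
        live = {h for hs in self.snapshots.values() for h in hs}
        # Remove only blocks that no remaining snapshot references.
        self.blocks = {h: b for h, b in self.blocks.items() if h in live}

    def restore(self, name):
        return b''.join(self.blocks[h] for h in self.snapshots[name])

store = SnapshotStore()
store.snapshot('s1', b'AAAABBBB')
store.snapshot('s2', b'AAAACCCC')   # only the changed second block is new
store.delete('s1')                  # drops 'BBBB'; shared 'AAAA' survives
```

Deleting s1 removes only the block no other snapshot needs, yet s2 still restores the full volume, matching the behavior described above.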

In a word, our solution is local storage + qcow2 images + dependent snapshots + Swift. This is similar to http://wiki.cloudstack.org/display/RelOps/Local+storage+for+data+volumes, but unlike CloudStack we support incremental snapshots.

If you are interested in this topic, please read the full specification: http://wiki.openstack.org/LocalStorageVolume


Thursday October 18, 2012 9:00am - 9:40am PDT
Emma C

9:00am PDT

Unconference

Content will be scheduled on site.


Thursday October 18, 2012 9:00am - 12:30pm PDT
Maggie

9:50am PDT

Keystone Internals
For people new to Keystone development, this session attempts to provide insight into how the Keystone code is structured.


Thursday October 18, 2012 9:50am - 10:30am PDT
Windsor BC

9:50am PDT

Managing Translations (I18N and L10N) in OpenStack

We are migrating the translation hosting for OpenStack projects to Transifex. To make use of these translations two things need to happen.

1) Pull translations from Transifex and merge them into the git repositories for the OpenStack projects. Jenkins is currently doing this for Nova using two jobs: the first pushes updated translation template files to Transifex when changes are merged, and the second pushes translation changes to Gerrit once a day. This is working for Nova but is not flexible enough for Horizon and OpenStack Manuals. We need to discuss the expectations of each project and ways to make the Jenkins job interface common across projects (and, if a common interface is not possible, determine the subsets that are needed).

2) Once projects have included translations in the git tree/packages/etc., the projects need to be able to consume these resources. I believe Horizon may be the only project currently capable of doing this. For example, Nova assumes the translations are located in a system-specific directory, which is not the case if Nova has been installed from source. We need to determine how best to consume translation resources. Perhaps a module is needed in openstack-common.
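The consumption problem can be sketched with the standard library's gettext module: look for catalogs shipped inside the package tree first (source installs), then fall back to the system locale directory. The directory layout and domain name below are assumptions for illustration, not Nova's actual ones.

```python
import gettext
import os

def get_translator(domain, package_dir, languages=None):
    """Return a translator, preferring in-tree catalogs over the
    system locale directory, and falling back to no-op translation
    when no catalog is found."""
    candidates = [
        os.path.join(package_dir, 'locale'),  # source/in-tree install
        '/usr/share/locale',                  # system package install
    ]
    for localedir in candidates:
        try:
            return gettext.translation(domain, localedir,
                                       languages=languages)
        except OSError:
            continue
    # No catalog anywhere: identity translation keeps the app working.
    return gettext.NullTranslations()

translator = get_translator('example-domain', '/opt/stack/nova')
print(translator.gettext('Compute service started'))
```

Something like this lookup order is one candidate for the shared helper in openstack-common mentioned above.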



Thursday October 18, 2012 9:50am - 10:30am PDT
Annie AB

9:50am PDT

nova api consistency
This session will discuss some of the issues in the existing nova API (both core and extensions), and how to address them going forward.

There are at least three classes of issues:

1) outright bugs (e.g. deleting an already-deleted floating IP returns 202, not 400)

2) inconsistencies in return codes (when is 200 used vs. 202)

3) functional gaps (e.g. IP pools can be retrieved via the API, but can only be created with nova-manage)
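To make class 1 concrete, here is a hypothetical sketch of the convention a fix might adopt: the first asynchronous delete returns 202 (accepted), and deleting an already-deleted resource returns the error status cited above instead. The state model and names are illustrative only.

```python
# Toy in-memory state standing in for the floating IP table.
_floating_ips = {'10.0.0.5': 'allocated'}

def delete_floating_ip(address):
    """Return the HTTP status an API handler should use for DELETE."""
    state = _floating_ips.get(address)
    if state is None or state == 'deleting':
        return 400  # unknown, already deleted, or delete in flight
    _floating_ips[address] = 'deleting'  # delete proceeds asynchronously
    return 202
```

The point is simply that repeated deletes become distinguishable from the first, which is what the current API fails to do.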

There is also the need to have a stable API.

What of these issues can we address in 2.0 without breaking anyone?
What about a 2.1 which is limited to just return code cleanup?
What is the plan for 3.0, both time frame, and adding new APIs to the core content?



Thursday October 18, 2012 9:50am - 10:30am PDT
Emma AB

9:50am PDT

Unified Storage Management for OpenStack

Today, OpenStack has support for block-based storage and object storage, but there is no explicit support for file-based storage. For certain applications, file-based storage has significant advantages over block storage and object storage, and a large amount of existing software is designed around file-based storage. We've heard demand from the operator community for a common management interface for file-based storage, and ideally, a common management interface for storage in general. We're proposing adding management of CIFS and NFS file sharing to OpenStack, and we believe that extending Cinder is the most logical way to achieve this, both because Cinder is already in the business of managing storage devices, and because users don’t desire yet another management interface. NetApp has developed a prototype to this end. It includes extended APIs as well as an implementation of a driver backend for Cinder. We'd like to unveil our progress and discuss these with the developer and user communities to see if this approach makes sense, and achieve consensus on a path forward.


Thursday October 18, 2012 9:50am - 10:30am PDT
Emma C

11:00am PDT

Building on Horizon: What's New + Builders' Q&A

Horizon is the framework for building the OpenStack Dashboard, but anyone can use it to build whatever dashboards they like.

This session will quickly recap the Essex "Building on Horizon" basics, dive into new features like Workflows, and leave plenty of time for people who are currently working on building new components for the dashboard to ask specific questions.


Thursday October 18, 2012 11:00am - 11:40am PDT
Emma C

11:00am PDT

LDAP and Active Directory integration
LDAP works, but there has been a lot of discussion about how it is supposed to work in the future. This session will hammer out the default LDAP schema, configuration, and Active Directory integration.


Thursday October 18, 2012 11:00am - 11:40am PDT
Windsor BC

11:00am PDT

MLs, IRC, Q&A... The future of community resources

We'll have a look at the current state of our communication tools and community resources: mailing-lists, IRC channels, Forums, Q&A sites, Bugtrackers, and discuss which improvements we can push during the Grizzly timeframe.


Thursday October 18, 2012 11:00am - 11:40am PDT
Annie AB

11:00am PDT

Refactoring API Input Validation
There has been a lot of development in the nova-api service, with many improvements around processing query and data parameters. At the same time, the bug rate in this area continues. Maybe it is time to step back, review, and potentially refactor some of this code. For example, the process of validating parameters and request bodies could be more consistent and concise. Inputs are still not being completely validated, error messages/responses are still not always supplied, and it is not easy to understand the list of valid parameters.

There seems to be an opportunity for the nova-api service to be thoroughly reviewed for correctness, and for patterns and refactoring opportunities to be identified that would improve the overall quality. One possibility is to leverage or create an API input-validation framework for nova-api that significantly reduces the number of lines of code required to process input parameters and validate their types/values, while also making the service more consistent and robust. The existing nova-api service could be reviewed and, in the process, converted to the new framework.
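As one hedged sketch of what such a framework could look like (the decorator, rule shape, and handler are illustrative, not an actual Nova API):

```python
import functools

def validate(**rules):
    """Check keyword arguments against (type, predicate) rules before
    calling the handler, replacing scattered ad-hoc checks with one
    declarative decorator per API method."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(**params):
            for name, (expected, check) in rules.items():
                value = params.get(name)
                if not isinstance(value, expected) or not check(value):
                    raise ValueError("invalid parameter: %s" % name)
            return func(**params)
        return wrapper
    return decorator

@validate(name=(str, lambda s: 0 < len(s) <= 255),
          min_count=(int, lambda n: n >= 1))
def create_server(name, min_count):
    return {'name': name, 'min_count': min_count}
```

The declarative rules also double as documentation of the valid parameters, addressing the discoverability complaint above.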

This session will explore some of the issues and ideas and hopefully define a potential plan and working group to make progress in this area for the Grizzly release.


Thursday October 18, 2012 11:00am - 11:40am PDT
Emma AB

11:50am PDT

Dependency Management

At the Folsom Design Summit we discussed the issue of reconciling the python dependencies of OpenStack projects and establishing a process for reviewing changes to our dependencies.

This session will discuss the progress on the topic since then and our plans for the Grizzly series.

A particular focus of the session will be how we can gather data from distros that will inform our decisions about proposed dependency updates.


Thursday October 18, 2012 11:50am - 12:30pm PDT
Annie AB

11:50am PDT

Horizon + Quantum Improvements

In Folsom we got an initial Quantum + Horizon integration working. However, there's still a lot more we need to expose, including features that arrived late in Folsom (L3, floating IPs, provider networks) and features that will arrive in Grizzly (security groups, load balancing).

Akihiro will be organizing this session.


Thursday October 18, 2012 11:50am - 12:30pm PDT
Emma C

11:50am PDT

Network Application Rate-limiting
There is a need to detect, log, and possibly rate-limit an instance's outbound network traffic based on its type and rate. This can help us detect and prevent things like SMTP abuse (spam), SSH brute force, and DDoS attacks, as well as mitigate port-scanning attempts.

Ideally, we would want to be able to dynamically create rules on a per-instance, per-tenant, and global basis, so that trusted parties could increase these limits as required. This could be done via a new Nova API, for example, 'nova app-rate-limit set ...'

HP has an initial implementation in Nova using iptables to both log and rate-limit certain types of network traffic.
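As a purely illustrative sketch of the kind of per-instance iptables rules such a system might generate (chain names, rates, and rule shape are assumptions, not HP's actual implementation):

```python
def rate_limit_rules(chain, dport, limit, burst):
    """Build iptables commands that log new outbound connections to a
    port, accept them up to a token-bucket rate, and drop the rest."""
    match = '-p tcp --dport %d -m state --state NEW' % dport
    return [
        'iptables -A %s %s -j LOG --log-prefix "rate-limit:"' % (chain, match),
        'iptables -A %s %s -m limit --limit %s --limit-burst %d -j ACCEPT'
        % (chain, match, limit, burst),
        'iptables -A %s %s -j DROP' % (chain, match),
    ]

# e.g. throttle outbound SMTP (a common spam pattern) for one instance:
smtp_rules = rate_limit_rules('inst-rate-limit', 25, '10/min', 20)
```

Raising a trusted tenant's limit would then just mean regenerating the chain with different `limit`/`burst` values, which maps naturally onto the per-instance/per-tenant API discussed above.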

Areas for discussion:
- Does this belong in Quantum or Nova?
- Should it be configured via quotas or a separate mechanism of its own?
- What traffic patterns does such a system need to be able to detect? (we have some examples)
- What implementations other than iptables are people interested in (and can the solution be generalized enough to cover them)?


Thursday October 18, 2012 11:50am - 12:30pm PDT
Emma AB

11:50am PDT

PKI Future
After an overview of the current state of PKI in Folsom, we'll launch into a discussion of where it needs to go in the near future.


Thursday October 18, 2012 11:50am - 12:30pm PDT
Windsor BC

1:30pm PDT

Horizon Grizzly Overview & Brainstorming

A working group session wherein current plans for Grizzly can be shared and anyone can suggest new features or blueprints they think should be addressed for the OpenStack Dashboard.


Thursday October 18, 2012 1:30pm - 2:10pm PDT
Emma C

1:30pm PDT

Integrated identity system for OpenStack

With OpenStack Foundation membership taking over from Launchpad as the main source of identification for OpenStack contributors, it's time to think about an identity system for the whole project.

Work is already in progress to integrate the CLA management in the OpenStack Gerrit review system with the members' database, in order to reduce friction during the onboarding of new developers and increase the reliability of our Contributor License Agreement records.

Borrowed from Google's way of managing CLAs, the new OpenStack Gerrit process will comply with the Foundation's bylaws and make onboarding easier.

This session will illustrate the new process and report on its status. Anybody who will be hiring developers to work on OpenStack in the future should attend this session.


Thursday October 18, 2012 1:30pm - 2:10pm PDT
Annie AB

1:30pm PDT

multifactor auth support in keystone
We made no progress on https://blueprints.launchpad.net/keystone/+spec/multi-factor-authn during the Folsom release. This session is to check on status, revitalize the topic, and see if there's interest in enabling this during the Grizzly release timeframe.


Thursday October 18, 2012 1:30pm - 2:10pm PDT
Windsor BC

1:30pm PDT

Support for Licensed Images
Support for licensed images within OpenStack has some unique issues which need to be resolved. Licensed images need to be identifiable when downloaded from Glance. Snapshots of licensed images must maintain their licensed identity. Scheduling of licensed images needs to balance license fee requirements with user experience requirements. Billing for licensed images needs to track appropriate license information. Windows images also have unique requirements for handling security, license key activation, and other cloud-init-like functionality. This discussion will revolve around solutions to these issues within Glance and Nova, with an additional emphasis on issues unique to supporting Microsoft Windows instances.


Thursday October 18, 2012 1:30pm - 2:10pm PDT
Emma AB

1:30pm PDT

Unconference

Content will be scheduled on site.


Thursday October 18, 2012 1:30pm - 5:40pm PDT
Maggie

2:20pm PDT

Cross-Project Signaling

OpenStack as a whole performs innumerable actions asynchronously and, in general, leaves other projects in the dark as to those goings-on. This affects Horizon especially, but enabling one project to learn about events in another would be useful in myriad ways. For example:

A tenant is deleted in Keystone; instances are subsequently orphaned and left running in Nova, images are orphaned in Glance, etc. If Keystone sent out a signal when that tenant was deleted, a concerted cleanup action could take place throughout the entire system.


Thursday October 18, 2012 2:20pm - 3:00pm PDT
Emma C

2:20pm PDT

Keystone V3 api - draft and initial implementation
An overview of the V3 API spec and what's available and being drafted in the v3-feature branch of Keystone.


Thursday October 18, 2012 2:20pm - 3:00pm PDT
Windsor BC

2:20pm PDT

Nova database archiving strategy
At present, records in the Nova database are not actually deleted; they are merely marked as deleted. HP has implemented a data-archiving solution in Nova which enables us to move database records pertaining to deleted entities (instances, volumes, key pairs, etc.) out of the tables used by the Nova application. Our solution moves records relating to entities deleted more than a configurable number of days ago to a set of 'shadow' tables containing deleted records. Not yet implemented, but we are also considering further archiving, i.e. removing records from the shadow tables and potentially storing them in text files. As we consider implementing a Folsom/Grizzly version of this solution, we'd be interested in how others have approached this problem, with a view to shaping a community-wide solution.
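A toy version of the shadow-table approach described above, using SQLite and illustrative table/column names rather than Nova's actual schema: rows marked deleted more than a configurable number of days ago move to a parallel shadow table.

```python
import sqlite3

def archive_deleted_rows(conn, table, days):
    """Move soft-deleted rows older than `days` to shadow_<table>."""
    cutoff = "datetime('now', '-%d days')" % days
    shadow = 'shadow_' + table
    # Create an empty shadow table with the same columns.
    conn.execute('CREATE TABLE IF NOT EXISTS %s AS '
                 'SELECT * FROM %s WHERE 0' % (shadow, table))
    conn.execute('INSERT INTO %s SELECT * FROM %s '
                 'WHERE deleted = 1 AND deleted_at < %s'
                 % (shadow, table, cutoff))
    conn.execute('DELETE FROM %s WHERE deleted = 1 AND deleted_at < %s'
                 % (table, cutoff))

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE instances '
             '(id INTEGER, deleted INTEGER, deleted_at TEXT)')
conn.execute("INSERT INTO instances VALUES "
             "(1, 1, datetime('now', '-90 days')), "
             "(2, 1, datetime('now', '-1 days')), "
             "(3, 0, NULL)")
archive_deleted_rows(conn, 'instances', 30)
```

After the run, the 90-day-old deleted instance has moved to the shadow table, while the recently deleted and the live instances stay put; further archiving (shadow tables to text files) would be a second pass over the shadow tables.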


Thursday October 18, 2012 2:20pm - 3:00pm PDT
Emma AB

2:20pm PDT

Stable Branch

The Folsom cycle was the second cycle where we maintained a stable branch for the previous release. We will look back over the stable/essex maint efforts and identify successes and failures.

The stable-maint process still has some rough edges and so we will discuss ideas for improvements, ideally setting some specific goals for stable/folsom maintenance.


Thursday October 18, 2012 2:20pm - 3:00pm PDT
Annie AB

3:20pm PDT

Improving Nova's database consistency
As Nova moves towards the goal of being highly available and supporting database failover, I believe there needs to be a consistent paradigm in place for all database operations which will work with existing open-source MySQL HA tools. I will highlight examples of how Nova's present handling of transaction state, rollback, and row uniqueness is, in some cases and in my opinion, not conducive to this goal. Then I will open the floor for discussion.


Thursday October 18, 2012 3:20pm - 4:00pm PDT
Emma AB

3:20pm PDT

Policy and Auth Delegation
We are shifting this talk to be specific to the details of implementing authenticated delegation/impersonation and gathering feedback on policy implementations and use cases needed by existing deployments.

This session will include the following subject(s):

Federation:

Federation means different things to different people. This session attempts to lay out the range of proposals and to drive toward a consensus on how to implement federation in Grizzly.


Thursday October 18, 2012 3:20pm - 4:00pm PDT
Windsor BC

3:20pm PDT

Realtime Communication In Horizon and OpenStack

Currently OpenStack uses a painful conglomeration of queues, push, pull, poll and other forms of communication to pass around the status of what's happening in the system.

A glorious future would be one in which everything is able to communicate in real time with efficient push/pubsub notifications.

This session aims to bring together the various stakeholders (core projects, downstream distros, security experts, etc) to discuss the right solutions for the future whether they get implemented in Grizzly or not.


Thursday October 18, 2012 3:20pm - 4:00pm PDT
Emma C

3:20pm PDT

Tracking OpenStack adoption

One of the objectives of the OpenStack Foundation is to "Make OpenStack the ubiquitous cloud operating system". In order to reach that objective the Foundation needs to understand more about the usage of the OpenStack software.

Until now we've been counting downloads from Launchpad, but that source is less and less accurate because OpenStack distributions have become the main way to test, run, and deploy OpenStack. We need something like the Canonical Census, which sends a daily anonymous 'I'm alive' ping for each OEM Ubuntu installation, or Mozilla's Telemetry, which captures more sophisticated details to gain insights into Firefox usage. (http://arewesnappyyet.com/)

This discussion is important for the OpenStack Foundation and should be joined by users and owners of distributions alike. The objective of the session is to identify what sort of information is needed to track OpenStack adoption and what systems we need to put in place to gather it.


Thursday October 18, 2012 3:20pm - 4:00pm PDT
Annie AB

4:10pm PDT

Feature deprecation workflow

The emergence and disappearance of features in OpenStack (at least nova) is pretty volatile. Zones? Diablo. No Zones but Host aggregates in Essex - but only half. Tenants vs. Projects? What is it? Until when?

With the project getting more mature, I think now is the time to have a coordinated deprecation process. We should have helper methods in the code that can easily flag deprecated features as such, so that users are warned that a feature or way of doing things might not last.

This does not necessarily mean that we lose dev speed. Deprecated functionality should, however, last at least one release.

Let's discuss.


Thursday October 18, 2012 4:10pm - 4:50pm PDT
Annie AB

4:10pm PDT

Splitting out scheduling
It would seem useful to disconnect scheduling from Nova and make it a more generic instance scheduler: a new service that gets instance information from some set of data sources and can use that information to make complicated allocation decisions (say, for groups of instances and such).

nova-api would basically treat scheduling as a black box, letting whatever management system does the scheduling be hidden behind a set of web-service APIs. nova-api would then just be 'dumb', responsible only for launching the chosen instances (with whatever associated data), letting Nova focus on virtualization rather than scheduling (which seems good).


Thursday October 18, 2012 4:10pm - 4:50pm PDT
Emma AB

5:00pm PDT

Grizzly release schedule and branch model

A discussion on how we'll organize the Grizzly release cycle, including release schedule, number of milestones, and the various freezes.

We'll also cover evolutions in the branching model, as well as versioning issues.


Thursday October 18, 2012 5:00pm - 5:40pm PDT
Annie AB
 