TSC Meeting Notes 2022 - OpenAMP/open-amp GitHub Wiki

Table of Contents

2022-04-12

Attended:

  • Bill Mills (Linaro vote)
  • Tammy Leino (Siemens vote)
  • Tomas Evensen (Xilinx vote)
  • Nathalie Chan King Choy
  • Dan Milea (WR vote)
  • Loic Pallardy (ST vote)
  • Ed Mooring
  • Arnaud Pouliquen
  • Andrew Wafaa (Arm vote)
  • Bruce Ashfield

Agenda:

  • Intros
  • Update on activities
  • Bill: Proposal for meta-open-amp maintainership
  • Dan, Arnaud, Bill: Proposal for progress on Virtio for OpenAMP

Decisions/conclusions

Action items

  • Nathalie: Early May System DT call: More tools-oriented (Must have: Bruce, Bill, Loic, Arnaud, Stefano, Marti)
  • Stefano: Also make sure to invite Bertrand, Luca
  • Tomas: Will discuss more with Arun

Notes

  • Recording link, view or download before it expires!
    • Passcode: 5+0xIFL=
  • We have quorum (6 of 9 members)
  • Andy Wafaa: interim Arm replacement for Grant (who became CTO of Linaro), on Yocto board
  • Quick update on OpenAMP System Reference activities: See HPP progress report March update
    • HPP is Linaro project that organizes Linaro engineering around OpenAMP
  • Bill: Proposal for meta-open-amp maintainership
    • Bill met w/ Xilinx
    • Xilinx was primary consumer of meta-openamp & was traditionally the maintainer of it. Xilinx has its own copy of OpenAMP in Xilinx GitHub org & that gets used for Xilinx SDKs. They discussed meta-openamp in OpenAMP GitHub org.
    • Propose to TSC that these members be the meta-openamp maintainers: Bill Mills, Mark Hatle, Ben Levinsky
      • Ben: Has been doing meta-openamp work in Xilinx SDK
      • Mark: Long-time contributor to OE
    • Ed: This is improvement over status quo
    • Bill: Previous maintainers all left Xilinx
    • Vote: No objections to the proposal
  • Dan, Arnaud, Bill: Proposal for progress on Virtio for OpenAMP
    • This is a new write-up on what we agreed to in last meeting. This document doesn't include the buffer model proposal from Bill, which is in another doc.
    • Let's expand the library that we have w/ virtio, beyond what we have for virtio for rpmsg
    • Will start w/ what's working in Dan's Zephyr work & he will incorporate Vincent's work
    • POR (plan of record) is to move to a more permissive license (requires support from Wind River). If that can't happen, will have to come back to discuss w/ TSC
    • Dan: Some interest from Jackson Han from Arm. Contacted Maarten about Zephyr virtio. They are working on HW support for virtio, which would need Zephyr virtio on top. Wanted to know when this would be integrated into Zephyr.
    • Arnaud: Looks good
    • Ed: No concerns
    • Any objections to moving forward with this plan? No objections.
    • Tomas: Have we discussed contractor for this?
    • Bill: Not yet. Once we get basics done, tasks that can split off will become more clear. Dan owns a lot of the first step & difficult to have someone else pick that up.
    • Dan: Agree, need to handle the initial merge first.
  • Stefano: Quick update on System DT project (21:00)
    • Next call on April 26
    • Looking to describe devices w/ different addresses (secure/non-secure resources). Came to a conclusion, very close to closing.
    • Focused on AMP configurations lately. Meant to work seamlessly for AMP or VMs or both. Next month, will look at generation of VM device assignment configuration using Lopper. Complex b/c involves unravelling clock dependencies.
  • Bruce: Quick update on Lopper
    • Trying to do POC for what Stefano mentioned
    • Getting into more workflows to process & parse & generate more info, beyond just pruning.
      • Doing more work w/ specs
      • Taking different kinds of inputs
      • Tree manipulation
      • On PyPI
      • Using in Yocto to do Xen generation
      • Stable enough that non-Xilinx ppl are trying to use it in their workflows
    • Nordic using for MCU-MCU & secure/non-secure DT for booting w/ Zephyr
    • Xilinx using for baremetal & Linux
    • Loic: Interested for generating TF-M configuration from DT. Today, have something quite generic based on YAML, but still basic. Maybe can find a common way to work w/ Zephyr.
    • Nathalie: Early May System DT call: More tools-oriented
      • Must have: Bruce, Bill, Loic, Arnaud, Stefano, Marti
    • Bill: Do we need a System DT virtual sprint? System Reference got a good jump start with sprint in Feb, especially getting documentation on how to do specific workflows.
    • Bruce: Currently instructions not there yet. Future plans for Yocto.
    • Tomas: Need to make sure everything else around it is there (e.g. drivers in progress)
    • Tomas: Another discussion on where DT comes from: Generation vs. checked-in
    • Bill: Problem if you try to use Zephyr & Linux b/c Zephyr has own bindings for DT that are not completely compatible with Linux. How to handle that? Modify Zephyr so it can consume Linux bindings? Have Lopper generate both? Have Lopper modify symmetrically?
    • Loic: Rob indicated can have some bindings that are not used by kernel, but if defining same properties in different ways that needs discussion.
    • Bill: Think we don't know scope of problem yet.
    • Loic: Agree. Don't think there is any Zephyr DT expert who can ensure new properties are aligned with Linux.
    • Bill: First step will be to scope problem
    • Stefano: Also make sure to invite Bertrand, Luca
  • Loic: Do we need to introduce additional OpenAMP library maintainer? Arnaud is very busy & getting involved in other activities in OpenAMP.
    • Ed: Not likely to come back from retirement
    • Tomas: Will discuss more with Arun

2022-02-17 Virtio discussion

Attended

  • Tanmay Shah
  • Bill Mills
  • Ed Mooring
  • Bruce Ashfield
  • Loic Pallardy
  • Maarten Koning
  • Arnaud Pouliquen
  • Tammy Leino
  • Dan Milea
  • Vincent Guittot
  • Oleksandr Tyshchenko
  • Oleksii Moisieiev
  • Tomas Evensen
  • Stefano Stabellini
  • Nathalie Chan King Choy

Agenda

  • Dan: Virtio framework for Zephyr & how it fits with Hypervisorless virtio
  • Arnaud: Virtio implementations of OpenAMP library, Vincent & Dan
  • Discussion
  • Next steps

Decisions/conclusions

  • Bill's proposal represents both sides pretty well and should go to TSC list for further discussion. Idea: Could create new repo in OpenAMP GitHub for a virtio devices library w/ more sophisticated virtio drivers. Gives us more control than if it's under Zephyr governance.
    • Initially, it contains what Dan has already. But, WR changes the license to BSD instead of Apache.
    • Then we work through the Zephyr-specific hooks that would be needed in library or Zephyr.
    • Then, if there's optimizations to make in Zephyr services libraries & OpenAMP library, those can happen naturally as we go forward.
    • Initially, the virtio services library has its own virtio layer & maybe its own virtio MMIO. Then we work to unify the bottom over some months.

Action items

  • Bill: Write up proposal and post to TSC list
  • Nathalie: Add Vincent to OpenAMP TSC list
  • Dan, Arnaud, Loic, Vincent to comment on OpenAMP TSC list message thread

Notes

  • Recording link
    • Passcode: v5M!nBg^
    • Please download sooner than later to catch up, before the recording expires
  • Dan: OpenAMP App Services virtio work
    • Link to slides
    • Goal of OpenAMP App Services working group is to enable OS-level APIs for high level applications
    • Using virtio b/c it is standard and has open specification
    • Created prototypes for Hypervisorless Virtio: Uses virtio & shared memory for transport + have virtio device drivers. Works on heterogeneous CPU clusters (e.g. big-little)
    • As part of this, we worked on standard virtio framework for Zephyr which we plan to upstream
    • Hypervisorless virtio for Zephyr is on top of the Zephyr framework. Still relies on shared memory, but uses a separate notification mechanism.
    • Want help w/ implementing virtual sockets & 9p b/c integral to higher level APIs we want to offer for OpenAMP
    • Talked about Zephyr virtio framework implementation in github.com/danmilea/zephyr
    • Looked at virtio framework in Vincent's repo github.com/vingu-linaro/zephyr.git branch virtio-over-zephyr (SCMI implementation)
      • Good: Uses lib open-amp and libmetal for handling virtio device, virtqueue & reserved memory
      • Treats the other parts of the deployment as virtio back-end instead of virtio device driver
    • Where we start to align with SCMI implementation: w/ hypervisorless, we start dealing w/ same constraints (e.g. well-defined memory regions)
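The hypervisorless configuration above keeps the shared-memory data path but replaces the hypervisor-mediated trap with a separate notification mechanism. A minimal sketch of that split, assuming nothing about the actual WR code (all names here are invented for illustration):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative transport: data still moves through a shared buffer,
 * but "kicking" the peer is a pluggable callback (e.g. a mailbox or
 * IPI in real hardware) instead of a hypervisor-mediated MMIO trap. */
struct hvl_transport {
    unsigned char shmem[64];   /* stand-in for the shared region */
    size_t len;
    void (*notify)(void *ctx); /* out-of-band doorbell */
    void *notify_ctx;
};

static int kicked;
static void fake_mailbox_kick(void *ctx) { (void)ctx; kicked++; }

static void hvl_send(struct hvl_transport *t, const void *msg, size_t n)
{
    memcpy(t->shmem, msg, n);  /* payload travels via shared memory */
    t->len = n;
    t->notify(t->notify_ctx);  /* then notify the other cluster */
}
```

The point of the sketch is only that the data path and the notification path are independent, which is what lets the same framework serve both virtualized and hypervisorless deployments.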
  • Arnaud
    • Link to slides
    • Looked at current OpenAMP library implementation, Vincent & Dan implementations from architecture POV
    • Can divide virtio protocol into 3 layers: transport, virtqueue, device
    • Current OpenAMP library implementation
      • Transport (remoteproc virtio via resource table in shared memory), virtqueue (virtio via vrings & buffers in shared memory), device (virtio rpmsg)
      • Can support guest & host mode. Linux only supports host.
      • Looked at Zephyr upstream & how the 3 parts of virtio protocol map to it.
    • Highlighted OpenAMP limitations.
    • Looked at Vincent implementation (Virtio SCMI) with Host running Zephyr + OpenAMP & Guest running Linux w/ OpenAMP
      • Firmware dedicated for SCMI server
      • Virtio to structure the vring & buffers
      • New driver virtio MMIO, mainly for signaling
      • Transport (virtio MMIO via MMIO control register & configuration space in shared memory), virtqueue (OpenAMP: virtio via vrings & buffers in shared memory), device (Virtio SCMI)
      • Looked at how maps to Zephyr (slide in the video needs some fixes - uploaded slides have the fixes)
      • Vincent: Virtio-SCMI is using virtio interface to push & pop, not directly accessing virtio. Goal: Don't have to care if you are slave or master, it's behind virtio MMIO driver.
    • Virtio for kvmtool virtualization
      • Host w/ KVMTool, Guest w/ Zephyr: Transport (virtio MMIO via MMIO control register & configuration space in shared memory), virtqueue (virtio via vrings & buffers in shared memory), device (virtio drivers & virtio devices)
      • Looked at how maps to Zephyr
    • Note: Virtio MMIO is described in Virtio spec, but static vring transport layer & remoteproc virtio transport layer are not described in the spec
    • View as OpenAMP maintainer
      • Showed Zephyr mapping
      • See more as wrapper to be able to develop device on host in OpenAMP that could be reused for other OS (e.g. NuttX)
      • Important to support transport layer in OpenAMP b/c virtio MMIO is part of spec. We need to support this layer to be compliant with spec.
      • Remoteproc virtio is similar to virtio MMIO: Status & feature negotiation. Maybe could merge both transport layers if same structure.
      • If we define resource table v2, could define virtio device that matches with virtio MMIO spec & merge both.
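Arnaud's point that remoteproc virtio and virtio MMIO share the same status & feature-negotiation pattern can be pictured with a small sketch. The status bit values are the ones defined in the virtio specification; the struct and function names are invented for illustration and are not from the OpenAMP library:

```c
#include <assert.h>
#include <stdint.h>

/* Device status bits as defined in the virtio specification. */
#define VIRTIO_STATUS_ACKNOWLEDGE 1u
#define VIRTIO_STATUS_DRIVER      2u
#define VIRTIO_STATUS_DRIVER_OK   4u
#define VIRTIO_STATUS_FEATURES_OK 8u

/* Transport layer: the handshake state that both virtio MMIO and
 * remoteproc virtio expose, which is why the two transports could
 * potentially share one abstraction. */
struct virtio_transport {
    uint32_t status;
    uint64_t device_features;  /* offered by the device side */
    uint64_t driver_features;  /* accepted by the driver side */
};

/* Driver-side negotiation: accept the intersection of what the
 * device offers and what this driver supports. */
static uint64_t negotiate(struct virtio_transport *t, uint64_t supported)
{
    t->status |= VIRTIO_STATUS_ACKNOWLEDGE | VIRTIO_STATUS_DRIVER;
    t->driver_features = t->device_features & supported;
    t->status |= VIRTIO_STATUS_FEATURES_OK;
    t->status |= VIRTIO_STATUS_DRIVER_OK;
    return t->driver_features;
}
```

Whether the handshake state lives in MMIO registers or in a resource table v2 entry, the driver-side logic stays the same, which is the basis for the merge idea above.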
  • Bill: Where does Dan handle bounce buffers?
    • Dan: Didn't go into details on hypervisorless b/c want to focus on virtio framework first, w/o OpenAMP dependencies. Virtio framework we contributed can be made to use Zephyr guests, regardless of OpenAMP or other dependencies. Arnaud's proposal shows improvement that includes all this technology, in an OpenAMP context rather than Zephyr standalone virtualization use case.
  • Bill: Having virtio devices only use constrained memory is seen even in presence of hypervisor. When you don't want to have Dom0 have lots of visibility into guests. e.g. Jailhouse. Like how the proposal has transition layer for working with/without that constraint. Another choice could be to write driver to include the capability to handle the constraint.
    • Dan: This is where overlap starts to be significant between API used for hypervisorless virtio & what Vincent did. API layer specific to SCMI application has code to get buffers from a well-defined memory region. Chose not to implement in the POC. It's the other side of the coin, which should be there.
    • Bill: Would like to offer both choices & let user configure what's appropriate for them.
  • Vincent: Interface at virtio level doesn't care if it's SCMI implementation or not. All the interfaces I put can be used as a guest or as a device. You can use either constrained memory or any memory when you are a guest. When slave/back-end, constraint was to use a buffer in reserved memory.
    • Arnaud: In current implementation, if you are host, you can add buffers from whatever memory you want. If guest (slave), have to use buffer allocated by host.
    • Vincent: Clarifying: Linux guest is the host mastering the virtqueue in the vring
    • Bill: Virtio terminology is driver & device
  • Vincent: POC: Generic virtio interface. On top of that, if you have SCMI implementation, you can set the feature you are supporting. You ask for cookie to be virtqueue. Push/pop requests to virtqueue. You don't know if you are driver or device of the virtqueue. Also, have get buffer for driver to get buffer in reserved memory of device. Otherwise, can provide whatever memory you want.
    • Dan: This is the API that I was referring to, that looks like it was designed to fit your application.
    • Vincent: But, it's not SCMI specific. Just push/pop. Want to abstract if you are driver or device of link.
      • I mainly want to be a device, but started tests as being driver.
  • Bill: Put API same for arbitrary or preallocated memory?
    • Vincent: Yes
    • Bill: Think we should have 2 sets of APIs, 1 for dedicated & 1 for memory from anywhere. Then can insert bounce buffer handling.
    • Vincent: This is why have get buffer. Driver uses get buffer - can be wrapped. Maybe should bounce...
    • Bill: API should be contract for how the buffers should be handled. Upper layer just needs to know the pattern it is implementing, can be device or driver, can use constrained memory or not:
      • Getting a buffer before populate & put
      • Take any buffer I have & give it
    • Vincent: My virtio SCMI doesn't know if it has constraint or not.
    • Bill: If it calls get buffer, it uses 1 set of API. If it doesn't, it uses the other set of API.
    • Vincent: If you are device/slave of link, if you do pop, you are not creating the buffer. Buffer is provided by driver & you push it back. Device never creates buffer, from virtio spec POV. Device has no control of where buffers are.
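Vincent's last point (the device never creates buffers; it only pops what the driver supplied and pushes it back) is the ownership rule from the virtio spec. A toy sketch of that rule, with invented names, assuming nothing about the actual virtqueue layout:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy queue illustrating the ownership rule from the discussion:
 * only the driver places buffers; the device pops one, fills it,
 * and pushes it back. It has no say in where buffers live. */
#define QLEN 4
struct toyq {
    void *avail[QLEN];  /* buffers offered by the driver */
    int avail_n;
    void *used[QLEN];   /* buffers returned by the device */
    int used_n;
};

static void driver_add(struct toyq *q, void *buf)
{
    q->avail[q->avail_n++] = buf;  /* driver owns allocation */
}

static void *device_pop(struct toyq *q)
{
    return q->avail_n ? q->avail[--q->avail_n] : NULL;
}

static void device_push(struct toyq *q, void *buf)
{
    q->used[q->used_n++] = buf;    /* device only returns buffers */
}
```

In a real virtqueue the available and used rings are descriptor indices in shared memory rather than raw pointers, but the asymmetry between the two roles is the same.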
  • Vincent's constraints: Memory must be a small, dedicated region. Used mainline Linux kernel; no changes to the Linux kernel.
    • Enabled some DMA __ device (virtio MMIO); Linux buffers must be bounced into this reserved memory.
    • Bill: Memory is small part is actually the general case, not just yours.
  • Bill: SCMI is too constrained an interface to show all the examples. Like virtio net b/c shows a lot more flexibility in how the driver could be implemented.
    • Dan: Get buffer API on top of WR's virtio implementation would transform hypervisorless virtio from dynamic layer w/ bounce buffers to more static implementation. But, that would mean changing the device drivers. So, that is the common part.
    • Bill: Yes. Let's offer both options.
      • Can choose to re-use Zephyr IP stack & live w/ bounce buffers & use this set of APIs. Virtio layer knows it has constraints & using bounce buffers.
      • Instead, can have high performance IP & replace IP stack. Want zero copy. Change IP stack to use get buffer to avoid copy.
    • Dan: That's do-able.
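Bill's two options (live with bounce buffers behind an any-memory API, or use a get-buffer API for zero copy) can be contrasted in a short sketch. Everything here is illustrative; these are not APIs from OpenAMP or the WR framework:

```c
#include <assert.h>
#include <string.h>

/* Two buffer disciplines from the discussion.
 * Pattern 1 (bounce): the stack hands over an arbitrary buffer and
 * the virtio layer copies it into the constrained region.
 * Pattern 2 (zero copy): the stack asks for a buffer inside the
 * constrained region up front and fills it in place. */
static unsigned char reserved[32];  /* stand-in for the shared region */
static int copies;                  /* counts bounce copies performed */

/* Pattern 1: any-memory API, pays one copy per send. */
static void send_any(const void *buf, unsigned n)
{
    memcpy(reserved, buf, n);       /* bounce into reserved memory */
    copies++;
}

/* Pattern 2: get-buffer API, no copy at send time. */
static void *get_buffer(void) { return reserved; }
static void send_prealloc(void *buf, unsigned n)
{
    (void)buf; (void)n;             /* data is already in place */
}
```

Pattern 1 lets an unmodified IP stack work at the cost of a copy; pattern 2 is what a stack modified to call get-buffer would use for zero copy, which is exactly the trade-off Bill proposes exposing as two API sets.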
  • Tomas: Configuration: We've been working w/ System DT to be able to specify in 1 place the virtio channels & Lopper gives Linux or RTOS what it needs. Does that apply to both of these?
    • Vincent: I'm using normal DT node to describe virtio MMIO. If Lopper can split System DT into 1 node for Zephyr & 1 node for Linux, no problem.
    • Arnaud: Added DT declaration in my slide (virtio_mmio)
    • Vincent: Only added device mode b/c that can support both driver or device. Specify if virtio node is device or driver of MMIO interface. Without binding, on Linux you would be driver, and on Zephyr you have to put device mode as true to act as device of MMIO interface.
    • Arnaud: Showed slide with DT configuration proposal by Dan. Can find same node for virtio MMIO, but device or drivers are defined as child of the transport node. Can define memory area for virtio MMIO.
    • Vincent: In my case, all the SCMI stuff is done by the (ST? SCP?) firmware. Just need generic virtio interface. SCMI is done in upper layer.
    • Arnaud: Could have wrapper on Zephyr for virtio to avoid exposing API at virtio MMIO level
  • Origin of discussion: Loic was concerned about having multiple virtio stacks in OpenAMP for Zephyr
    • Dan: What would prevent us from upstreaming the virtio framework to Zephyr & tweak it more to fit the OpenAMP centric vision? We haven't yet asked Zephyr guys if they want a virtio framework.
    • Loic: Concern that this would make it too Zephyr-specific. What you're proposing could be interesting to NuttX, FreeRTOS, Nucleus, etc. When talking to customers, it is hard for them to switch RTOS. If in OpenAMP, it would be easier to reuse in more RTOS b/c OpenAMP is an independent library they could port to their RTOS.
  • Bill: Idea: Could create new repo in OpenAMP for a virtio devices library.
    • Initially, it contains what Dan has already. But, WR changes the license to BSD instead of Apache.
    • Then we work through the Zephyr-specific hooks that would be needed in library or Zephyr.
    • Then, if there's optimizations to make in Zephyr services libraries & OpenAMP library, those can happen naturally as we go forward.
    • Arnaud: Concern: would result in 2 implementations of virtqueue layer
      • Dan: We have couple hundred lines of code for virtqueue / vring handling. OpenAMP's is similar. Combining a few hundred lines of code is not a lot of effort & can do if needed.
    • Bill: And we have Vincent & Dan virtio-mmio
    • Vincent: Why didn't you try to use OpenAMP virtqueue interface?
      • Dan: No fantastic reason
    • Arnaud: Today, only device part is implemented in virtio core. Wonder if complexity is in driver implementation of virtio. You have to allocate buffers & fill descriptors & initialize vrings. Implemented in OpenAMP, but not in virtio by Dan.
      • Dan: We need that generic virtio back-end library that we talked about. That's what's missing. We dealt w/ that in hypervisorless mode by reusing kvmtools back-end implementation but that doesn't really hit OpenAMP.
    • Bill: The interesting use case is Zephyr-Zephyr device to driver. That shows we have all the capabilities needed.
      • Dan: So, we're back to talking about back-end implementation for Zephyr
    • Dan: Biased towards pitch for OpenAMP app services. Want to provide high level APIs on top of that: Sockets, file system access, etc. That limits options for back-ends - need complex back-end to do that.
    • Arnaud: We could start to use virtqueue layer from OpenAMP & implement virtio MMIO directly in Zephyr as first step. Today we have static vring transport layer implementation. Would match with current architecture in Zephyr. Then when we introduce virtio-mmio back-end in OpenAMP & device & driver for console, etc. we could modify driver to act as a wrapper as 2nd step. Gives an initial implementation in Zephyr, then a plan to move to OpenAMP.
      • Bill: Will be hard to tell ppl to move away from Zephyr to library
    • Bill: Imagine future will have apps we don't want in OpenAMP library. Will want those in separate repo that is add-on to OpenAMP library. Add another repo w/ more sophisticated virtio drivers. The future looks like 2 repos. We start with that now & we work out details of sharing lower half independently. Initially, the virtio services library has its own virtio layer & maybe its own virtio MMIO. Then we work to unify the bottom over some months.
      • Arnaud: Will be a fork short/middle term. Concern: Difficult to come back to something unified
      • Bill: If it's 2 libs under control of OpenAMP, we can make the changes we decide to make. Concern, is if we put it under control of Zephyr governance, we'll have much less control.
      • Dan: Like Bill's approach b/c sets clear path forward & is do-able