=====Contents=====
  * [[#NAPI_Driver_design|1 NAPI Driver design]]
  * [[#Hardware_Architecture|1.1 Hardware Architecture]]
  * [[#Locking_rules_and_environmental_guarantees|1.2 Locking rules and environmental guarantees]]
  * [[#NAPI_API|2 NAPI API]]
  * [[#Advantages|3 Advantages]]
  * [[#Performance_under_high_packet_load|3.1 Performance under high packet load]]
  * [[#Use_of_softirq_for_other_optimizations|3.2 Use of softirq for other optimizations]]
  * [[#Hardware_Flow_control|3.3 Hardware Flow control]]
  * [[#Disadvantages|4 Disadvantages]]
  * [[#Latency|4.1 Latency]]
  * [[#IRQ_masking|4.2 IRQ masking]]
  * [[#Issues|5 Issues]]
  * [[#IRQ_race_a.k.a_rotting_packet|5.1 IRQ race a.k.a rotting packet]]
  * [[#IRQ_mask_and_level-triggered|5.1.1 IRQ mask and level-triggered]]
  * [[#non-level_sensitive_IRQs|5.1.2 non-level sensitive IRQs]]
  * [[#Scheduling_issues|5.2 Scheduling issues]]
  * [[#External_Links|6 External Links]]
=====NAPI Driver design=====
    * what is known as Clear-on-read (COR): when you read the status/event register, it clears everything! The natsemi and sunbmac NICs are known to do this. In this case your only choice is to move everything to napi->poll().
  * Clear-on-write (COW)
    * you clear the status by writing a 1 to the bit location you want cleared. The majority of NICs work this way, and they work best with NAPI. Put only receive events in napi->poll(); leave the rest in the old interrupt handler.
    * whatever you write in the status register clears everything.
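For a COW-style NIC, the split described above can be sketched as follows. This is a kernel driver fragment, not a standalone program; the register names (''STATUS'', ''INTR_MASK'', ''RX_OK'', ''TX_OK''), the ''my_priv'' layout and ''my_nic_tx_complete()'' are hypothetical, standing in for whatever the real hardware defines:

```c
/* Sketch of an interrupt handler for a Clear-on-write (COW) NIC.
 * Register offsets, bit names and the private struct are hypothetical. */
#include <linux/netdevice.h>
#include <linux/interrupt.h>
#include <linux/io.h>

static irqreturn_t my_nic_interrupt(int irq, void *dev_id)
{
	struct net_device *dev = dev_id;
	struct my_priv *priv = netdev_priv(dev);
	u32 status = ioread32(priv->regs + STATUS);

	if (!status)
		return IRQ_NONE;	/* not our interrupt */

	if (status & RX_OK) {
		/* Do NOT ack the receive bit here: mask further rx
		 * interrupts and let napi->poll() process (and then
		 * ack) the receive events. */
		iowrite32(RX_OK, priv->regs + INTR_MASK);
		napi_schedule(&priv->napi);
	}

	if (status & TX_OK) {
		/* Non-receive events stay in the hard interrupt path:
		 * with COW semantics, writing a 1 clears the bit. */
		iowrite32(TX_OK, priv->regs + STATUS);
		my_nic_tx_complete(dev);
	}

	return IRQ_HANDLED;
}
```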
  * Only one CPU at any time can call napi->poll() for each ''napi_struct''; this is because only one CPU can pick up the initial interrupt and hence issue the initial //napi_schedule(napi)//.
  * The core layer invokes devices to send packets in a round-robin fashion. This implies that receive is totally lockless, because of the guarantee that only one CPU is executing it.
  * Contention can only be the result of some other CPU accessing the rx ring. This happens only in close() and suspend() (when these methods try to clean the rx ring). Driver authors need not worry about this; synchronization is taken care of for them by the top net layer.
  * **napi_schedule_prep(napi)**
    * Puts ''napi'' in a state ready to be added to the CPU polling list if it is up and running. You can look at this as the first half of //napi_schedule(napi)//.
  * **<nowiki>__napi_schedule(napi)</nowiki>**
    * Adds ''napi'' to the poll list for this CPU, assuming that //napi_schedule_prep(napi)// has already been called and returned 1.
  * **napi_reschedule(napi)**
    * Called to reschedule polling for ''napi'', specifically for some deficient hardware.
  * **napi_complete(napi)**
    * Removes ''napi'' from the CPU poll list: it must be on the poll list of the current CPU. This primitive is called by //<nowiki>napi->poll()</nowiki>// when it completes its work. The structure cannot be off the poll list at this call; if it is, then clearly it is a BUG().
  * **<nowiki>__napi_complete(napi)</nowiki>**
    * Same as //napi_complete// but called when local interrupts are already disabled.
  * **napi_disable(napi)**
    * Temporarily prevents the ''napi'' structure from being polled. May sleep if it is currently being polled.