python asyncio - ghdrako/doc_snipets GitHub Wiki
- Asynchronous I/O Operations: Operations that allow the program to continue executing while waiting for I/O to complete, improving performance and scalability.
- Coroutines: Special functions that can pause and resume their execution, allowing for asynchronous programming in Python. They are defined using the `async` keyword and paused using the `await` keyword.
- Event Loop: The central component of asyncio, responsible for coordinating and executing asynchronous tasks. It continuously polls for I/O events and schedules tasks for execution, ensuring efficient resource utilization and responsiveness.
- Futures: Represent the results of asynchronous operations that have not yet completed. They let you track the progress of asynchronous tasks and retrieve their results once they are finished.
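These pieces fit together as in the following minimal sketch; the `compute` coroutine and its 0.1 s delay are illustrative stand-ins for real asynchronous I/O:

```python
import asyncio

async def compute(x):
    # A coroutine: it pauses at each await, letting the event loop run other work
    await asyncio.sleep(0.1)  # simulated asynchronous I/O
    return x * 2

async def main():
    # A Future that another coroutine will fill in later
    loop = asyncio.get_running_loop()
    future = loop.create_future()

    async def producer():
        future.set_result(await compute(21))

    asyncio.ensure_future(producer())  # schedule producer() on the event loop
    return await future  # suspends here until the future has a result

print(asyncio.run(main()))  # 42
```

`asyncio.run()` creates the event loop, drives `main()` to completion, and closes the loop again.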
Asyncio functions:
- `asyncio.run`: a function introduced in Python 3.7 that runs the top-level entry point of an asyncio program. It creates and manages the event loop for you, making it easy to execute coroutines.
- `asyncio.gather`: concurrently executes multiple coroutines and gathers their results. It waits for all of them to complete and returns their results as a list, in the same order as they were passed.
- `asyncio.wait`: concurrently executes multiple awaitables but provides more control over the execution flow than `asyncio.gather`. It returns two sets of tasks: one with tasks that have completed (or raised an exception) and one with tasks still pending.
- `asyncio.create_task`: creates a task from a coroutine and schedules it for execution on the event loop. It is commonly used to convert coroutines into tasks and manage their execution.
- `async`: used to define asynchronous functions (coroutines), indicating that they may contain `await` expressions.
- `await`: used inside async functions to pause execution until an asynchronous operation completes, without blocking the event loop.
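A small sketch of how these functions differ in practice; the `work` coroutine and its delays are invented for illustration:

```python
import asyncio

async def work(name, delay):
    # Illustrative task: sleep briefly, then return the given name
    await asyncio.sleep(delay)
    return name

async def main():
    # asyncio.gather: run coroutines concurrently; results keep submission order
    results = await asyncio.gather(work('a', 0.2), work('b', 0.1))
    print(results)  # ['a', 'b']

    # asyncio.create_task: schedule a coroutine right away, await its result later
    task = asyncio.create_task(work('c', 0.1))
    print(await task)  # c

    # asyncio.wait: returns (done, pending) sets of tasks for finer control
    tasks = {asyncio.create_task(work('d', 0.1)),
             asyncio.create_task(work('e', 0.5))}
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    print(len(done), len(pending))  # 1 1
    for t in pending:
        t.cancel()

asyncio.run(main())
```

Note that `asyncio.wait` requires tasks (not bare coroutines) on current Python versions, which is why the coroutines are wrapped with `asyncio.create_task` first.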
Asynchronous HTTP Server in Python
```python
import asyncio

async def handle_request(reader, writer):
    data = await reader.read(1024)  # read up to 1024 bytes of the request
    message = data.decode()
    addr = writer.get_extra_info('peername')
    print(f"Received request from {addr}: {message}")
    # Simulate some processing time
    await asyncio.sleep(1)
    response = "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\n\r\nHello, world!"
    writer.write(response.encode())
    await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(
        handle_request, '127.0.0.1', 8888)
    addr = server.sockets[0].getsockname()
    print(f'Serving on {addr}')
    async with server:
        await server.serve_forever()

asyncio.run(main())
```
In this example, we define a coroutine `handle_request` to process incoming HTTP requests asynchronously. We then create an asynchronous TCP server using `asyncio.start_server()` and run it with `asyncio.run()`. The server listens on port 8888 and responds with a simple "Hello, world!" message for every request.
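To exercise the server, a matching client can be sketched with `asyncio.open_connection`; the helper name `send_request` is hypothetical, and the host and port mirror the server above:

```python
import asyncio

async def send_request(host='127.0.0.1', port=8888):
    # Connect to the server above and issue a minimal GET request
    reader, writer = await asyncio.open_connection(host, port)
    writer.write(b'GET / HTTP/1.1\r\nHost: localhost\r\n\r\n')
    await writer.drain()
    response = await reader.read()  # the server closes the connection after responding
    writer.close()
    await writer.wait_closed()
    return response.decode()

# With the server running in another process:
# print(asyncio.run(send_request()))
```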
Asynchronous HTTP Client in Python
```python
import aiohttp
import asyncio

async def fetch(url):
    try:
        async with aiohttp.ClientSession() as session:
            async with session.get(url) as response:
                return await response.text()
    except (aiohttp.ClientError, asyncio.TimeoutError) as e:
        print(f"Error fetching data from {url}: {e}")

async def main():
    url = 'https://example.com'
    response = await fetch(url)
    print(response)

asyncio.run(main())
```
```python
import asyncio
import utils  # a local module providing the fetch_data() coroutine

async def main():
    urls = ['http://example.com', 'http://example.org', 'http://example.net']
    for url in urls:
        data = await utils.fetch_data(url)  # requests run one after another
        print(f'Response from {url}: {data}')

asyncio.run(main())
```
```python
import asyncio
import aiohttp

async def fetch_data(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            return await response.text()

async def main():
    urls = ['http://example.com', 'http://example.org', 'http://example.net']
    tasks = [fetch_data(url) for url in urls]
    results = await asyncio.gather(*tasks)  # all requests run concurrently
    for url, data in zip(urls, results):
        print(f'Data from {url}: {data}')

asyncio.run(main())
```
`fetch_data(url)` is an asynchronous coroutine that fetches data from a URL using the aiohttp library. In the first version, `main()` iterates through a list of URLs and awaits each `fetch_data()` call in turn, so the requests run sequentially, one after another. In the second version, `asyncio.gather()` schedules all the coroutines at once, so the HTTP requests execute concurrently without blocking each other.
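The difference is easy to measure with a stand-in coroutine; no network is needed, since the illustrative `fake_fetch` simply sleeps to simulate an HTTP request:

```python
import asyncio
import time

async def fake_fetch(url):
    # Stand-in for an HTTP request: just sleeps for 0.1 s
    await asyncio.sleep(0.1)
    return f'data from {url}'

async def sequential(urls):
    # Each await finishes before the next request starts
    return [await fake_fetch(u) for u in urls]

async def concurrent(urls):
    # All requests are in flight at the same time
    return await asyncio.gather(*(fake_fetch(u) for u in urls))

urls = ['a', 'b', 'c']

start = time.perf_counter()
asyncio.run(sequential(urls))
print(f'sequential: {time.perf_counter() - start:.2f}s')  # roughly 0.3 s

start = time.perf_counter()
asyncio.run(concurrent(urls))
print(f'concurrent: {time.perf_counter() - start:.2f}s')  # roughly 0.1 s
```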
Asynchronous Context Managers for Resource Cleanup
```python
# Example: Using an asynchronous context manager to manage a file resource
import aiofiles

async def write_to_file(filename, data):
    try:
        async with aiofiles.open(filename, 'w') as f:
            await f.write(data)
    except OSError as e:
        print(f"Error writing to file {filename}: {e}")
```
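Beyond aiofiles, you can define your own asynchronous context managers; one low-ceremony way is `contextlib.asynccontextmanager`. The `managed_resource` name and print statements below are illustrative:

```python
import asyncio
from contextlib import asynccontextmanager

@asynccontextmanager
async def managed_resource(name):
    # "__aenter__" part: acquire the resource
    print(f'acquiring {name}')
    await asyncio.sleep(0)  # stands in for an async open/connect call
    try:
        yield name
    finally:
        # "__aexit__" part: always runs, even if the body raises
        print(f'releasing {name}')

async def main():
    async with managed_resource('db-connection') as res:
        print(f'using {res}')

asyncio.run(main())
```

The `finally` block guarantees cleanup runs whether the `async with` body completes normally or raises.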
Asynchronous TCP Server in Python
```python
# Example: Asynchronous TCP server using asyncio
import asyncio

async def handle_client(reader, writer):
    data = await reader.read(1024)  # read up to 1024 bytes from the client
    message = data.decode()
    addr = writer.get_extra_info('peername')
    print(f"Received request from {addr}: {message}")
    writer.write(data)  # echo the data back
    await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(handle_client, '127.0.0.1', 8080)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```
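A client for this echo server might look like the following sketch; `echo_client` is a hypothetical helper, and port 8080 matches the server above:

```python
import asyncio

async def echo_client(message, host='127.0.0.1', port=8080):
    # Connect, send the message, and read the echoed reply
    reader, writer = await asyncio.open_connection(host, port)
    writer.write(message.encode())
    await writer.drain()
    data = await reader.read()  # the server closes the connection after echoing
    writer.close()
    await writer.wait_closed()
    return data.decode()

# With the server running in another process:
# print(asyncio.run(echo_client('ping')))
```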
Chaining Asynchronous Operations: Handling Multiple Requests Simultaneously
Chaining asynchronous operations involves executing multiple asynchronous tasks sequentially, where the result of one operation serves as input for the next. This allows dependencies between tasks to be expressed cleanly while the event loop remains free to run other work.
```python
import aiohttp
import asyncio

async def fetch_data(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            return await response.text()

async def chain_requests(urls):
    results = []
    for url in urls:
        data = await fetch_data(url)
        results.append(data)
        # parse_data() is assumed to be defined elsewhere;
        # example: extract the next URL from the response data
        next_url = parse_data(data)
        if next_url:
            next_data = await fetch_data(next_url)
            results.append(next_data)
    return results

async def main():
    urls = ['http://example.com', 'http://example.org', 'http://example.net']
    results = await chain_requests(urls)
    print(results)

asyncio.run(main())
```