Concurrency methods in Python: Loops and Thread Pools

Python has a number of ways to do the same repetitive job. As you would expect, different methods have different advantages and disadvantages. This article is the first in a series of articles each illustrating two methods of processing multiple requests with varying amounts and types of concurrency. In this article, I’ll cover simple loops and thread pools.

Each of the programs in this series performs the same task. The task I’ve chosen for these articles is simple: determine if a given IP address supports telnet or ssh (yes, some things still support telnet). It’s kind of an odd task, but it makes a reasonable sample problem for these articles. Let’s start with a Python function that can tell you if a given port is open:

from typing import Tuple, List, Generator
import socket
PortStatus = Tuple[str, int, bool] # Our result type
def check_a_port(ip_or_name: str, port: int) -> PortStatus:
    try:
        sock = socket.create_connection((ip_or_name, port), timeout=10)
        sock.close()
        return ip_or_name, port, True
    except (TimeoutError, OSError):
        return ip_or_name, port, False

The function check_a_port is a pretty simple function – it returns its parameters along with True or False depending on whether it was able to connect to the requested port.
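If you'd like to try the function without poking at a real host, you can point it at a throwaway local listener. This is just a sketch for illustration; the 127.0.0.1 address and the OS-assigned port are stand-ins, not part of the article's scanner:

```python
import socket
from typing import Tuple

PortStatus = Tuple[str, int, bool]

def check_a_port(ip_or_name: str, port: int) -> PortStatus:
    # Same function as above, repeated so this snippet runs standalone.
    try:
        sock = socket.create_connection((ip_or_name, port), timeout=10)
        sock.close()
        return ip_or_name, port, True
    except (TimeoutError, OSError):
        return ip_or_name, port, False

# Throwaway listener: binding to port 0 asks the OS for any free port.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen()
open_port = server.getsockname()[1]

result = check_a_port("127.0.0.1", open_port)
print(result)  # the third element is True: the port accepted a connection
server.close()
```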

Doing It Serially

If you want to check several IP:port combinations, call it in a loop like this:

def check_ports(ip_ports: List[Tuple[str, int]]) -> Generator[PortStatus, None, None]:
    for ip, port in ip_ports:
        yield check_a_port(ip, port)

I chose to have check_ports return a generator rather than a list, because connection attempts can take a longish time to complete. To use it, call check_ports like this:

print(list(check_ports([("", 22), ("", 23), ("", 23)])))

This is fine if you don’t mind each connection attempt taking up to 10 seconds. But if you want to check hundreds or thousands of IP:port combinations, it gets really slow, because the attempts all run one after the other. The traditional simple answer is to do the work in threads, so that the attempts run in parallel. That’s a reasonable answer, but I specifically recommend thread pools, because a thread pool is a higher-level thread-based construct that’s perfect for this use case.
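For comparison, here’s a sketch of that "plain threads" approach: one threading.Thread per target, with a lock-guarded list collecting results. The 127.0.0.1 targets are placeholders. It works, but it creates one OS thread per job, which is exactly the unbounded growth a pool keeps in check:

```python
import socket
import threading
from typing import List, Tuple

PortStatus = Tuple[str, int, bool]

def check_a_port(ip_or_name: str, port: int) -> PortStatus:
    # Same function as earlier, repeated so this snippet runs standalone.
    try:
        sock = socket.create_connection((ip_or_name, port), timeout=10)
        sock.close()
        return ip_or_name, port, True
    except (TimeoutError, OSError):
        return ip_or_name, port, False

results: List[PortStatus] = []
lock = threading.Lock()

def worker(ip: str, port: int) -> None:
    status = check_a_port(ip, port)
    with lock:  # guard the shared list while appending
        results.append(status)

# One thread per target: fine for a handful, unbounded for thousands.
targets = [("127.0.0.1", 1), ("127.0.0.1", 1)]
threads = [threading.Thread(target=worker, args=t) for t in targets]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)
```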

Dive Into Thread Pools

A thread pool is a limited collection of worker threads that can get through a potentially unbounded amount of work while consuming only bounded resources (threads and memory). Each worker thread pulls a job from the queue of submitted work, performs it, and, whenever it would otherwise go idle, looks for the next one; each thread does its work one job at a time. Creating and using a thread pool is simple:

from concurrent.futures import ThreadPoolExecutor, as_completed
def check_ports(ip_ports: List[Tuple[str, int]]) -> Generator[PortStatus, None, None]:
    with ThreadPoolExecutor(100) as pool:
        tasks = [pool.submit(check_a_port, ip, port) for ip, port in ip_ports]
        for work in as_completed(tasks):
            yield work.result()

Although this is pretty straightforward once you get the hang of it, it bears a little explanation. Here are a few things that seem worth mentioning:

  1. The argument to ThreadPoolExecutor is the maximum number of threads to create.
  2. pool.submit() is called with a function and its parameters. When a worker in the thread pool gets around to performing the work, it will call that function with those parameters.
  3. The return type of pool.submit() is a Future object: a placeholder that will hold the function’s return value once the work completes.
  4. concurrent.futures.as_completed() is pretty cool. It’s an iterator that yields your Futures as each one completes, so by the time it hands you a Future, calling result() on it won’t block.
  5. Calling result() on a Future object returns the result of the threaded computation. If you call it before it completes, you will block waiting for the result.
  6. The size of the thread pool determines the maximum parallelism that you’re going to get in performing these computations. For tasks like these, you should size your thread pool taking into account things like the maximum number of open file descriptors your process is permitted, and other available system resources.
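To make point 6 concrete, here’s one possible sizing heuristic for Unix-like systems (the resource module is not available on Windows). The cap of 100 and the halving margin are arbitrary illustrative choices, not recommendations:

```python
import resource
from concurrent.futures import ThreadPoolExecutor

# Each in-flight connection attempt consumes a file descriptor, so keep
# the pool well under the process's soft limit on open descriptors.
soft_limit, _hard_limit = resource.getrlimit(resource.RLIMIT_NOFILE)
max_workers = max(1, min(100, soft_limit // 2))

pool = ThreadPoolExecutor(max_workers)
# ... submit work to the pool here ...
pool.shutdown()

print(max_workers)
```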

Disadvantages of Thread Pools

Although this was simple to write, and relatively simple to understand (which are big virtues in my book), there are some disadvantages to this approach as well:

  • Python threads become real OS threads. On Linux, OS threads are approximately as expensive as processes. So, they’re reasonably expensive (or maybe it’s that Linux processes are cheap).
  • Because of CPython’s infamous Global Interpreter Lock (GIL), you can never do more than one core’s worth of work in a single process. This is true in spite of the fact that Python threads are real OS threads. How this plays out: if you have a blazing hot 24-core processor and are running only this code, you’ll observe that no matter how many threads you create, total CPU usage never rises above roughly one core’s worth, so each core stays about 1/24 busy or less.
  • Because of contention over the GIL, eventually you reach a number of threads where throughput no longer goes up – but levels off and goes down. This optimum number of threads is very application dependent.
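You can watch the GIL effect directly with a small experiment: run a pure-Python CPU-bound function twice serially, then twice on two pool threads, and compare. The spin function and the iteration count are just illustrative. On a standard CPython build, the threaded run is typically no faster:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def spin(n: int) -> int:
    """Pure-Python, CPU-bound work that holds the GIL while it runs."""
    total = 0
    for i in range(n):
        total += i
    return total

N = 2_000_000

start = time.perf_counter()
for _ in range(2):
    spin(N)
serial = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=2) as pool:
    list(pool.map(spin, [N, N]))
threaded = time.perf_counter() - start

# With the GIL, the two threads take turns on one core, so the
# threaded timing lands in the same ballpark as the serial one.
print(f"serial: {serial:.2f}s  threaded: {threaded:.2f}s")
```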

In subsequent articles, I’ll present other Python parallelism methods. Some run faster, some run with lower overhead, and some do both.

These techniques for parallelism will include:

  • Processes instead of thread pools
  • Processes and thread pools
  • Asyncio event loops
  • Asyncio event loops and processes

Stay tuned for the next installment!
