Find a list of substrings in a list of strings in Python

    listStrings = ['ACDE', 'CDDE', 'BPLL', ...]
    listSubstrings = ['ACD', 'BPI', 'KLJ', ...]

The above entries are just examples; len(listStrings) is ~60,000, len(listSubstrings) is ~50,000-300,000, and len(listStrings[i]) is anywhere from 10 to 30,000.

Approach 1:

    for i in listSubstrings:
        for j in listStrings:
            if i in j:
                file.write(i + j)

While this works for my task, it’s horribly slow, using one core and taking on the order of 40 minutes to complete.
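For reference, here is the brute-force approach packaged as a self-contained function (the function name and toy inputs are mine, for illustration); it makes the cost explicit: every needle is checked against every haystack, so the work is len(listSubstrings) × len(listStrings) substring scans no matter how the loops are arranged:

```python
# Brute-force baseline (sketch; names are illustrative).
# Cost is O(needles * haystacks) substring scans.
def brute_force(haystacks, needles):
    out = []
    for needle in needles:          # every needle...
        for haystack in haystacks:  # ...against every haystack
            if needle in haystack:
                out.append(needle + haystack)
    return out

print(brute_force(['ACDE', 'CDDE', 'BPLL'], ['ACD', 'BPI', 'CDD']))
# → ['ACDACDE', 'CDDCDDE']
```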

Approach 2:

For a start, I’d suggest using the Aho-Corasick string matching algorithm. Basically, in exchange for some precomputation to build a matcher object from your set of fixed strings, you can scan another string for all of those fixed strings at once, in a single pass.

So instead of scanning 60K strings 50K+ times each (three BILLION scans?!?), you can scan them each once with an only slightly higher cost than a normal single scan, and get all the hits.

The best part is, you’re not writing it yourself. PyPI (the Python package index) already has the pyahocorasick package written for you. So try it out.

Example of use:

    import ahocorasick

    listStrings = ['ACDE', 'CDDE', 'BPLL', ...]
    listSubstrings = ['ACD', 'BPI', 'KLJ', ...]

    auto = ahocorasick.Automaton()
    for substr in listSubstrings:
        auto.add_word(substr, substr)
    auto.make_automaton()

    ...

    for astr in listStrings:
        for end_ind, found in auto.iter(astr):
            w.write(found + astr)

This will write multiple times if a substring (“needle”) is found in the string being searched (“haystack”) more than once. You could change the loop to make it only write on the first hit for a given needle in a given haystack by using a set to dedup:

    for astr in listStrings:
        seen = set()
        for end_ind, found in auto.iter(astr):
            if found not in seen:
                seen.add(found)
                w.write(found + astr)

You can further tweak this to output the needles for a given haystack in the same order they appeared in listSubstrings (and uniquifying while you’re at it) by storing the index of the words as or with their values so you can sort hits (presumably small numbers, so sort overhead is trivial):

    from future_builtins import map  # Only on Py2, for more efficient generator based map
    from itertools import groupby
    from operator import itemgetter

    auto = ahocorasick.Automaton()
    for i, substr in enumerate(listSubstrings):
        # Store index and substr so we can recover original ordering
        auto.add_word(substr, (i, substr))
    auto.make_automaton()

    ...

    for astr in listStrings:
        # Gets all hits, sorting by the index in listSubstrings, so we output hits
        # in the same order we theoretically searched for them
        allfound = sorted(map(itemgetter(1), auto.iter(astr)))
        # Using groupby dedups already sorted inputs cheaply; the map throws away
        # the index since we don't need it
        for found, _ in groupby(map(itemgetter(1), allfound)):
            w.write(found + astr)
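If pulling in a third-party package isn’t an option, a rough standard-library approximation (my sketch, not part of the answer above) is to join all the needles into one compiled regex alternation. Each haystack is still scanned in a single pass, but with a caveat: re.finditer yields only non-overlapping matches and picks one alternative per position, so a needle that only ever occurs overlapping another match can be missed, unlike with Aho-Corasick:

```python
import re

# Stdlib-only approximation (sketch): one alternation regex scans each
# haystack in a single pass. Caveat: finditer reports non-overlapping
# matches only, so needles overlapping a longer/earlier match may be missed.
listStrings = ['ACDE', 'CDDE', 'BPLL']
listSubstrings = ['ACD', 'CDD', 'BPL']

# Longest-first so longer needles aren't shadowed by shorter alternatives
pattern = re.compile('|'.join(
    re.escape(s) for s in sorted(listSubstrings, key=len, reverse=True)))

results = []
for astr in listStrings:
    # Set comprehension dedups repeated hits within one haystack
    for found in sorted({m.group(0) for m in pattern.finditer(astr)}):
        results.append(found + astr)
print(results)  # → ['ACDACDE', 'CDDCDDE', 'BPLBPLL']
```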

For performance comparisons, I used a variant of the inputs that is more likely to contain matches, with enlarged haystacks. First, the setup code:

    >>> from random import choice, randint
    >>> from string import ascii_uppercase as uppercase
    >>> # 5000 haystacks, each 1000-5000 characters long
    >>> listStrings = [''.join([choice(uppercase) for i in range(randint(1000, 5000))]) for j in range(5000)]
    >>> # ~1000 needles (might be slightly less for dups), each 3-12 characters long
    >>> listSubstrings = tuple({''.join([choice(uppercase) for i in range(randint(3, 12))]) for j in range(1000)})
    >>> auto = ahocorasick.Automaton()
    >>> for needle in listSubstrings:
    ...     auto.add_word(needle, needle)
    ...
    >>> auto.make_automaton()

And now to actually test it (using IPython’s %timeit magic for microbenchmarks):

    >>> sum(needle in haystack for haystack in listStrings for needle in listSubstrings)
    80279  # Will differ depending on random seed
    >>> sum(len(set(map(itemgetter(1), auto.iter(haystack)))) for haystack in listStrings)
    80279  # Same behavior after uniquifying results
    >>> %timeit -r5 sum(needle in haystack for haystack in listStrings for needle in listSubstrings)
    1 loops, best of 5: 9.79 s per loop
    >>> %timeit -r5 sum(len(set(map(itemgetter(1), auto.iter(haystack)))) for haystack in listStrings)
    1 loops, best of 5: 460 ms per loop

So for checking for ~1000 smallish strings in each of 5000 moderate size strings, pyahocorasick beats individual membership tests by a factor of ~21x on my machine. It scales well as the size of listSubstrings increases too; when I initialized it the same way, but with 10,000 smallish strings instead of 1000, the total time required increased from ~460 ms to ~852 ms, a 1.85x time multiplier to perform 10x as many logical searches.

For the record, the time to build the automata is trivial in this sort of context. You pay it once up front, not once per haystack, and testing shows the ~1000 string automaton took ~1.4 ms to build and occupied ~277 KB of memory (above and beyond the strings themselves); the ~10,000 string automaton took ~21 ms to build and occupied ~2.45 MB of memory.