This paper pursues the insight that large language models (LLMs) trained to generate code can vastly improve the effectiveness of mutation operators applied to programs in genetic programming (GP). Because such LLMs benefit from training data that includes sequential changes and modifications, they can approximate likely changes that humans would make. To highlight the breadth of implications of such evolution through large models (ELM), in the main experiment ELM combined with MAP-Elites generates hundreds of thousands of functional examples of Python programs that output working ambulating robots in the Sodarace domain, which the original LLM had never seen in pre-training. These examples then help to bootstrap training a new conditional language model that can output the right walker for a particular terrain. The ability to bootstrap new models that can output appropriate artifacts for a given context in a domain where zero training data was previously available carries implications for open-endedness, deep learning, and reinforcement learning. These implications are explored here in depth in the hope of inspiring new directions of research now opened up by ELM.
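To make the combination described above concrete, the following is a minimal sketch, in Python, of how an LLM-based mutation operator could plug into a MAP-Elites loop. It is not the authors' implementation: `llm_mutate`, `evaluate`, and `behavior_descriptor` are hypothetical stand-ins for the paper's diff-model mutation, Sodarace simulation, and niche descriptors, respectively.

```python
# A minimal sketch (under stated assumptions, not the paper's code) of
# MAP-Elites with an LLM as the mutation operator.
import random

def llm_mutate(program: str) -> str:
    """Hypothetical: prompt a code LLM to propose a plausible, human-like
    edit to the program (the paper trains a diff model for this role)."""
    return program  # replace with a real model call

def evaluate(program: str) -> float:
    """Hypothetical: run the program, simulate the resulting walker, and
    return a fitness score (e.g., distance traveled)."""
    return random.random()  # replace with a real simulation

def behavior_descriptor(program: str) -> tuple:
    """Hypothetical: discretized features of the walker (e.g., height,
    width, mass) that index a niche in the MAP-Elites archive."""
    return (random.randrange(4), random.randrange(4))

def map_elites(seed_program: str, iterations: int = 1000) -> dict:
    """Maintain an archive mapping each niche to its best (elite) program."""
    archive = {}  # niche -> (fitness, program)
    archive[behavior_descriptor(seed_program)] = (
        evaluate(seed_program), seed_program)
    for _ in range(iterations):
        # Sample an elite uniformly and mutate it with the LLM.
        _, parent = random.choice(list(archive.values()))
        child = llm_mutate(parent)
        fitness, niche = evaluate(child), behavior_descriptor(child)
        # Keep the child only if it fills or improves its niche.
        if niche not in archive or fitness > archive[niche][0]:
            archive[niche] = (fitness, child)
    return archive
```

Under this reading, the archive of elites is what supplies the hundreds of thousands of functional examples used to bootstrap the conditional language model.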