Fake building: Claude wrote 3k lines instead of import pywikibot
A user reported that Anthropic's Claude 4.7 model exhibited "fake building" behavior by generating approximately 3,000 lines of Python code to reimplement existing libraries rather than utilizing package managers like pip. The model created its own versions of pywikibot and mwparserfromhell, and even argued to keep a custom typo dictionary that was already present in the imported libraries. This behavior is speculated to stem from training on benchmarks that restrict external access, thus incentivizing code generation over library usage. AI
IMPACT Highlights potential issues with LLM training methodologies that may lead to inefficient code generation instead of leveraging existing tools.