AI is still smart and kind of stupid at the same time

by John Hammer

17 February 2026

My thesis is turning a 1943 Tibetan census (the Water Sheep Survey) and land survey into computer-readable data and performing economic analysis. The field of Tibetan studies is devoid of serious quantitative analysis. The few numerical measures available (pre-1959) in the research literature were not terribly meaningful and sometimes wrong. When one sees that an otherwise intelligent researcher is unable to add a column of numbers, one wonders how they passed the GRE.

I was confronted with 1,400 pages of Tibetan text that detailed household population, agricultural land ownership, and residential building attributes. The survey was conducted for tax purposes, and as a useless attempt at maintaining sovereignty over a region the Tibetan government had ceded to the British Raj in 1914.

I ended up using Claude/Anthropic before they started bombarding us with advertising. It was the best bad choice after trying about half a dozen other alternatives. It seemed to translate Tibetan and organize data while incurring the least brain damage in the process. Maybe the latter was wishful thinking. 

Every day for months, I tweaked the multi-page prompt to more accurately translate the terms and format the data. I started by performing a superficial manual translation to understand the original text.  

Much later, I had the brilliant idea to take a deep dive and manually translate scores of pages to become intimately familiar with the idiosyncrasies and variances embedded within. This was a text written in cursive, gathered by multiple people, with non-standard measures. I wish I had thought of gaining a meaningful understanding sooner. This made corrections to the prompt precise and helped eliminate Claude’s falsehoods. 

AI Tricks

Claude hallucinated measurements that did not exist. I dwelled on those because they were the most useful for analytical purposes. Had I become familiar with the source document sooner, I would not have spent time trying to develop data that did not exist. 

Claude refused to learn how to format output. I had to cut and paste into MS Excel. I do not recall finishing a section of text without needing to redo a table multiple times. Claude would start out well, then output whatever format it felt like. It required constant observation. After apologizing and acknowledging the mistake (often stating the fix), Claude would then repeat the error.
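One cheap guard against this kind of format drift, before pasting a batch into Excel, is to check that every row has the expected number of fields. A minimal sketch, assuming tab-separated model output; the column names are hypothetical, not those of the actual survey:

```python
# Sketch: validate rows pasted from the model before loading into a spreadsheet.
# Column names are invented examples, not the real survey fields.
EXPECTED_COLUMNS = ["household", "persons", "land_area", "building_type"]

def validate_rows(pasted_text: str) -> tuple[list[list[str]], list[int]]:
    """Split tab-separated text into rows; report the line numbers that have
    the wrong number of fields, so a malformed batch is caught immediately."""
    good, bad = [], []
    for i, line in enumerate(pasted_text.strip().splitlines(), start=1):
        fields = line.split("\t")
        if len(fields) == len(EXPECTED_COLUMNS):
            good.append(fields)
        else:
            bad.append(i)
    return good, bad

rows, errors = validate_rows("H1\t5\t2.5\tstone\nH2\t4\n")
# rows == [["H1", "5", "2.5", "stone"]]; line 2 is flagged in errors
```

Running a check like this on each pasted section turns "constant observation" into a one-second automated step.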

Lessons learned 

There were many, but here are a few:

  • One must construct the prompt in the same way one gives instructions to a poodle. I guess everyone knew that but me.  
  • Monitor constantly.
  • Know the source intimately.
  • Research what others said about the source document.
    • In this case, academics put the document into book form from something resembling a scroll. They described the process and deficiencies. Look for that commentary before starting if possible. 
  • Understand the background and purpose of the document before starting. That is too obvious, but being new to academic research, this did not occur to me immediately. 
  • I do not know about other AI services, but Claude could not output files, even something as simple as JPG or CSV. That wasted man-days trying to get it to work. I gave up and relied on cut and paste.
  • Incorporate cross-checks in the prompt. 
  • Spend time manually cleansing the data after everything is done. AI can help by quantifying the errors and allowing one to focus on systemic problems with the largest number of records. 
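The last two points can be combined: the same cross-checks one writes into the prompt can be rerun over the finished data to quantify errors. A minimal sketch, assuming each extracted record carries a stated household total, a member list, and a land area; the field names are invented for illustration:

```python
# Sketch: quantify extraction errors so cleanup effort goes to the most
# common failure modes first. Record fields are hypothetical examples.
from collections import Counter

def audit(records: list[dict]) -> Counter:
    """Count error types across records: a stated household total that
    disagrees with the listed members, or a non-numeric land area."""
    problems = Counter()
    for rec in records:
        if rec.get("stated_total") != len(rec.get("members", [])):
            problems["total_mismatch"] += 1
        try:
            float(rec.get("land_area", ""))
        except ValueError:
            problems["bad_land_area"] += 1
    return problems

sample = [
    {"stated_total": 3, "members": ["a", "b", "c"], "land_area": "2.5"},
    {"stated_total": 4, "members": ["a", "b"], "land_area": "n/a"},
]
# audit(sample) -> Counter({'total_mismatch': 1, 'bad_land_area': 1})
```

Sorting the resulting counts shows at a glance which systemic problem affects the largest number of records, which is where manual cleansing pays off most.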

I hope the above allows the reader to escape some of the stupid mistakes I made.