New transformer architecture can make language models faster and more resource-efficient