  1. Few-shot Learning: 2020-12-14 to 2027-01-01 - metaculus.com

    Dec 14, 2020 · How many e-prints on Few-Shot Learning will be published on arXiv over the 2020-01-01 to 2027-01-01 period? Latest estimate

  2. When will a language model with at least 100B parameters be …

    Dec 7, 2021 · Additionally, the model must at least partially match capabilities of GPT-3, especially good few-shot learning ability. Ongoing attempts at recreating GPT-3 should not be …

  3. Substance is all you need. - Metaculus

    This essay was submitted to the AI Progress Essay Contest, an initiative that focused on the timing and impact of transformative artificial intelligence. You can read the …

  4. Chinese AI Beats GPT-4 on Few-Shot MMLU - metaculus.com

    May 26, 2025 · When will a Chinese entity develop a model surpassing GPT-4's few-shot performance on MMLU?

  5. AI competence in diverse fields of expertise - metaculus.com

    Jun 20, 2025 · Metaculus is an online forecasting platform and aggregation engine working to improve human reasoning and coordination on topics of global importance.

  6. MATH SOTA AI Performance - metaculus.com

    Jun 30, 2023 · These questions resolve as the highest performance achieved on MATH by June 30 in the following years by an eligible model. Eligible models may use scratch space before …

  7. 100B Param Language Model - metaculus.com

    Due to work by Google and OpenAI, big pre-trained language models have gained recognition as multitask and few-shot learners, bringing us a step closer to artificial general intelligence. Big …