Feedback neural network
Technique in artificial intelligence

Feedback neural networks are neural networks that can feed information from their outputs or later layers back to their inputs or earlier layers, combining bottom-up and top-down signals. The technique is notably used in large language models, specifically in reasoning language models (RLMs). The process is designed to mimic self-assessment and internal deliberation, aiming to minimize errors (such as hallucinations) and increase interpretability. This reflection is a form of "test-time compute": additional computational resources are used during inference.

Introduction

Traditional neural networks process inputs in a feedforward manner, generating outputs in a single pass. However, their limitations in handling complex tasks, and especially compositional ones, have led to the development of methods that simulate internal deliberation. Techniques such as chain-of-thought prompting encourage models to generate intermediate reasoning steps, thereby improving their performance in such tasks.
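
As an illustration, the following is a minimal sketch of the difference between direct prompting and chain-of-thought prompting; the complete function is a hypothetical placeholder for any text-completion model call, not a specific API.

    # Minimal sketch of chain-of-thought (CoT) prompting.
    # `complete(prompt)` is a hypothetical placeholder for a
    # text-completion model call, not any specific library's API.
    def complete(prompt: str) -> str:
        raise NotImplementedError("plug in a language model here")

    question = "A train travels 60 km in 1.5 hours. What is its average speed?"

    # Direct prompting: the model must produce the answer in one step.
    direct_prompt = f"Q: {question}\nA:"

    # Chain-of-thought prompting: the model is nudged to emit
    # intermediate reasoning steps before the final answer.
    cot_prompt = f"Q: {question}\nA: Let's think step by step."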

The feedback can occur either after a full network pass, once the output has been decoded to tokens, or continuously in latent space, where the last layer's activations are fed back to the first layer.[1][2] In LLMs, special tokens (e.g., <thinking>) can mark the beginning and end of the reflection that precedes the final response.
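
A toy sketch of the latent-space variant, under the assumption of a simple recurrent block standing in for the network's layers (all shapes, weights, and the number of extra passes are illustrative, not taken from the cited papers):

    import numpy as np

    # Toy sketch of latent-space feedback: instead of decoding tokens
    # after one pass, the final hidden state is fed back to the first
    # layer for several extra passes (more test-time compute).
    rng = np.random.default_rng(0)
    d = 8                                   # hidden width (illustrative)
    W_in, W1, W2 = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))

    def block(h):
        """Two tanh layers standing in for a transformer stack."""
        return np.tanh(np.tanh(h @ W1) @ W2)

    x = rng.standard_normal(d)              # embedded input
    h = np.tanh(x @ W_in)                   # first forward pass
    for _ in range(4):                      # feedback passes
        h = block(h + np.tanh(x @ W_in))    # last layer fed back to the first
    # `h` would then be decoded to tokens only once, after the loop.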

This internal process of "thinking" about the steps leading to an answer is designed to be analogous to human metacognition or "thinking about thinking". It helps AI systems approach tasks that require multi-step reasoning, planning, and logical thought.

Techniques

Lengthening the chain-of-thought reasoning process, by passing the model's output back to its input across multiple network passes, is a form of inference-time scaling that improves performance.[3] Reinforcement learning frameworks have also been used to steer the chain of thought. One example is Group Relative Policy Optimization (GRPO), used in DeepSeek-R1,[4] a variant of policy gradient methods that eliminates the need for a separate "critic" model by normalizing rewards within a group of generated outputs, which reduces computational cost (see the sketch below). Simple techniques such as "budget forcing", which compels the model to continue generating reasoning steps, have also proven effective in improving performance.[5]
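
As a concrete illustration of the group normalization at the heart of GRPO, here is a minimal sketch; the reward values are invented for the example:

    import statistics

    # Sketch of GRPO's group-relative advantage: rewards for a group of
    # sampled outputs are normalized against the group itself, which
    # replaces a separate learned critic as the baseline.
    group_rewards = [0.0, 1.0, 0.5, 1.0]    # e.g. correctness of 4 samples
    mean = statistics.mean(group_rewards)
    std = statistics.pstdev(group_rewards) or 1.0   # avoid division by zero

    advantages = [(r - mean) / std for r in group_rewards]
    # Each output's advantage then weights its policy-gradient update.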

Types of reflection

Post-hoc reflection

Post-hoc reflection analyzes and critiques an initial output separately, often by prompting the model to identify errors or suggest improvements after it has generated a response. The Reflexion framework follows this approach.[6]
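
A minimal sketch of such a post-hoc loop, in the spirit of Reflexion; generate, critique, and is_acceptable are hypothetical placeholders for model calls, not part of the published framework:

    # Post-hoc reflection: generate, critique the output separately,
    # then retry with the critique added to the context.
    def generate(task: str, feedback: list[str]) -> str:
        raise NotImplementedError    # hypothetical model call

    def critique(task: str, answer: str) -> str:
        raise NotImplementedError    # hypothetical self-critique call

    def is_acceptable(note: str) -> bool:
        raise NotImplementedError    # hypothetical stopping check

    def reflect(task: str, max_rounds: int = 3) -> str:
        feedback: list[str] = []
        answer = generate(task, feedback)
        for _ in range(max_rounds):
            note = critique(task, answer)      # critique after generation
            if is_acceptable(note):
                break
            feedback.append(note)              # keep verbal feedback in memory
            answer = generate(task, feedback)  # regenerate with the critique
        return answer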

Iterative reflection

Iterative reflection revises earlier parts of a response dynamically during generation; self-monitoring mechanisms allow the model to adjust its reasoning as it progresses. Methods such as Tree-of-Thoughts exemplify this, enabling backtracking and the exploration of alternatives.
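
A minimal sketch of this idea as a beam search over partial reasoning paths, loosely modeled on Tree-of-Thoughts; propose and score are hypothetical model calls:

    # Iterative reflection as tree search: expand candidate reasoning
    # steps, self-evaluate them, and keep only the most promising paths
    # (dropping a path is what enables backtracking).
    def propose(state: str) -> list[str]:
        raise NotImplementedError    # hypothetical: candidate next steps

    def score(state: str) -> float:
        raise NotImplementedError    # hypothetical: self-evaluation of a path

    def tree_of_thoughts(root: str, depth: int, beam: int = 2) -> str:
        frontier = [root]
        for _ in range(depth):
            candidates = [s + "\n" + step
                          for s in frontier for step in propose(s)]
            frontier = sorted(candidates, key=score, reverse=True)[:beam]
        return max(frontier, key=score)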

Intrinsic reflection

Intrinsic reflection integrates self-monitoring directly into the model architecture rather than relying solely on external prompts, giving models an inherent awareness of their reasoning limitations and uncertainties. Google DeepMind has applied this approach in a technique called Self-Correction via Reinforcement Learning (SCoRe), which rewards the model for improving its responses.[7]
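
A minimal sketch of a self-correction reward in the spirit of SCoRe, assuming a hypothetical task-specific scorer grade (the published method's training setup is more involved):

    # Reward the model when its revised answer improves on its first
    # attempt, so training favors genuine self-correction.
    def grade(answer: str) -> float:
        raise NotImplementedError    # hypothetical task-specific scorer

    def correction_reward(first_attempt: str, revised: str) -> float:
        return grade(revised) - grade(first_attempt)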

Process reward models and limitations

Early research explored process reward models (PRMs), which provide feedback on each intermediate reasoning step rather than only on the final outcome, as in traditional reinforcement learning. However, PRMs have faced challenges, including computational cost and reward hacking. DeepSeek-R1's developers found them not to be beneficial.[8][9]
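
The contrast can be sketched as follows; verify_answer and rate_step are hypothetical scorers, and the per-step judging in process_reward is exactly what makes PRMs costly and prone to reward hacking:

    # Outcome reward: only the final result is scored.
    def verify_answer(final_answer: str) -> float:
        raise NotImplementedError    # hypothetical: 1.0 if correct else 0.0

    def rate_step(step: str) -> float:
        raise NotImplementedError    # hypothetical per-step judge

    def outcome_reward(steps: list[str], final_answer: str) -> float:
        return verify_answer(final_answer)

    # Process reward: every intermediate step is scored, giving denser
    # feedback at the cost of running a judge on each step.
    def process_reward(steps: list[str]) -> float:
        return sum(rate_step(s) for s in steps) / max(len(steps), 1)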

See also

  • Reflective programming
  • Reservoir computing

References

  1. Geiping, Jonas; McLeish, Sean; Jain, Neel; Kirchenbauer, John; Singh, Siddharth; Bartoldson, Brian R.; Kailkhura, Bhavya; Bhatele, Abhinav; Goldstein, Tom (2025). "Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach". arXiv:2502.05171 [cs.LG].
  2. Hao, Shibo; Sukhbaatar, Sainbayar; Su, DiJia; Li, Xian; Hu, Zhiting; Weston, Jason; Tian, Yuandong (2024). "Training Large Language Models to Reason in a Continuous Latent Space". arXiv:2412.06769 [cs.CL].
  3. DeepSeek-AI; et al. (2025). "DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning". arXiv:2501.12948 [cs.CL].
  4. Shao, Zhihong; Wang, Peiyi; Zhu, Qihao; Xu, Runxin; Song, Junxiao; Bi, Xiao; Zhang, Haowei; Zhang, Mingchuan; Li, Y. K.; Wu, Y.; Guo, Daya (2024). "DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models". arXiv:2402.03300 [cs.CL].
  5. Muennighoff, Niklas; Yang, Zitong; Shi, Weijia; Li, Xiang Lisa; Fei-Fei, Li; Hajishirzi, Hannaneh; Zettlemoyer, Luke; Liang, Percy; Candès, Emmanuel; Hashimoto, Tatsunori (2025). "s1: Simple test-time scaling". arXiv:2501.19393 [cs.CL].
  6. Shinn, Noah; Cassano, Federico; Berman, Edward; Gopinath, Ashwin; Narasimhan, Karthik; Yao, Shunyu (2023). "Reflexion: Language Agents with Verbal Reinforcement Learning". arXiv:2303.11366 [cs.AI].
  7. Dickson, Ben (1 October 2024). "DeepMind's SCoRe shows LLMs can use their internal knowledge to correct their mistakes". VentureBeat. Retrieved 20 February 2025.
  8. Uesato, Jonathan; Kushman, Nate; Kumar, Ramana; Song, Francis; Siegel, Noah; Wang, Lisa; Creswell, Antonia; Irving, Geoffrey; Higgins, Irina (2022). "Solving math word problems with process- and outcome-based feedback". arXiv:2211.14275 [cs.LG].
  9. Lightman, Hunter; Kosaraju, Vineet; Burda, Yura; Edwards, Harri; Baker, Bowen; Lee, Teddy; Leike, Jan; Schulman, John; Sutskever, Ilya; Cobbe, Karl (2023). "Let's Verify Step by Step". arXiv:2305.20050 [cs.LG].