Information and Instructions

This platform gives you access to Synthetic Cortex (Scortex) version L1.5, which works by simulating the L2 version of Scortex. By design, the Scortex architecture integrates with the hidden and output layers of the underlying LLM. However, because we currently lack the budget for inference servers capable of running large models, we have built a simulation that approximates that process as closely as possible. It synchronizes with the APIs of large models and performs its operations through external protocols, using logprobs, embeddings endpoints, or configuration parameters (some APIs expose internal layers to a limited extent, and these are used where available). As a result, everything runs on a minimal server over external connections.
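
To make the setup above concrete, here is a minimal sketch of that kind of external integration, assuming an OpenAI-compatible API accessed through the official openai Python SDK. The SDK, the model names, and the way the returned signals are packaged are assumptions chosen for illustration; the actual Scortex protocol and server code are not shown here.

    # Minimal sketch (assumption: an OpenAI-compatible API via the `openai` SDK).
    # It only shows how per-token logprobs and an embedding can be pulled over an
    # external connection; the real Scortex modules that consume these signals
    # are not reproduced here.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def external_signals(prompt: str) -> dict:
        # 1) Chat completion with per-token logprobs exposed by the API.
        chat = client.chat.completions.create(
            model="gpt-4o-mini",  # hypothetical model choice
            messages=[{"role": "user", "content": prompt}],
            logprobs=True,
            top_logprobs=5,
        )
        token_logprobs = [t.logprob for t in chat.choices[0].logprobs.content]

        # 2) Embedding of the prompt from the embeddings endpoint.
        emb = client.embeddings.create(
            model="text-embedding-3-small",
            input=prompt,
        )

        return {
            "answer": chat.choices[0].message.content,
            "token_logprobs": token_logprobs,        # passed to external modules
            "prompt_embedding": emb.data[0].embedding,
        }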

In other words, the model you access here is not the actual model; it is a preview intended to give you an idea of how it works. We are sharing the architecture of our L2 model openly and transparently, step by step. You can follow our blog channels for more information.

Quick Start

Share your test results with us:
Each tester is granted 10 to 20 inputs. Please report any issues, errors, or anything you find unclear during your tests.

Fill out the template below and send it to scortexinfo@gmail.com:

Is the response quality satisfactory: ?/10
Do instincts triggered in long dialogues contribute to broadening your perspective: ?/10
How do the responses compare with those of other models: ?/10
Did you encounter any frontend/design errors: ?/10
Determine your overall score: ?/10

Evaluation Notes:

Usage Instructions:

  • Emotional effects emerge gradually, so you may not notice an instinctual context right away. In our tests with prompts carrying a strong emotional context, the effect appeared as early as the 3rd prompt and as late as the 10th.
  • Addressing the model with simple phrases like “hello” or “hi” causes unnecessary resource consumption. Instead, we recommend testing with complex, relational, and multi-parameter questions.
  • The graphs on the left panel show the emotional oscillations generated during the model’s reasoning process. The first bar graph lets you monitor the hormone and neurotransmitter values produced after each question. (Note: these emotional loads are specific to the model’s architecture and should not be compared to human physiology.)
  • The pie chart on the right panel displays the VAD (valence-arousal-dominance) analysis applied to your text. This analysis measures the emotional dominance of your words and gives the model insights about you. An external algorithm can convert these dominance values into emotional loads (see the sketch after this list).
  • To reduce token consumption, the episodic memory that recalls previous conversations has been disabled (short- and long-term memory are inactive).
  • The model is optimized for English. If you ask questions in another language, we recommend specifying the language in the prompt.
  • Model output comes with two labels: How It Thought | Habilis McHomo
    • How It Thought | Under this label, the model’s reasoning and the technical operations it performs with emotional values are translated into human language for the user. At this stage, the operations performed in external modules during the CoT (chain-of-thought) process are also shared, so you can follow each step. In other words, the model explains its thought process in a way humans can understand.
    • Habilis McHomo | This section contains the final output of the inferences made during the CoT process. (Habilis is the model’s name, inspired by Homo habilis.)
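
As an illustration of the external conversion mentioned in the panel description above, the hypothetical sketch below maps a VAD (valence, arousal, dominance) reading to a few example emotional-load channels. The channel names, weights, and value ranges are invented for the example; they are not the algorithm Scortex actually uses.

    # Hypothetical sketch only: converts a VAD reading in [0, 1] per axis into
    # illustrative "emotional load" channels. The channel names and weights are
    # invented for this example and do not reproduce the Scortex algorithm.
    from dataclasses import dataclass

    @dataclass
    class VAD:
        valence: float    # 0 = negative, 1 = positive
        arousal: float    # 0 = calm, 1 = excited
        dominance: float  # 0 = submissive, 1 = dominant

    def to_emotional_loads(vad: VAD) -> dict:
        # Clamp each axis defensively, then combine the axes with example
        # weights into a few illustrative load channels.
        v = min(max(vad.valence, 0.0), 1.0)
        a = min(max(vad.arousal, 0.0), 1.0)
        d = min(max(vad.dominance, 0.0), 1.0)
        return {
            "reward_like": round(0.7 * v + 0.3 * d, 3),
            "stress_like": round(0.6 * a * (1.0 - v), 3),
            "drive_like":  round(0.5 * a + 0.5 * d, 3),
        }

    # Example: a confident, mildly positive, fairly calm message.
    print(to_emotional_loads(VAD(valence=0.6, arousal=0.3, dominance=0.8)))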