Outsider Theorists, Generative AI and the Participation Threshold
Abstract
This discussion paper examines how generative AI may alter who is able to participate in theoretical work. Instead of asking whether AI will replace experts, it adopts a narrower focus: whether large language models help non-academic authors cross the practical threshold from unstructured ideas to manuscripts that are coherent enough for a field to assess critically.
Using a stylised comparison between an academic theorist and an outsider with the same idea and access to the same AI tools, the paper argues that AI may act as a leveller at the participation threshold without equalising expertise or institutional advantage. Outsiders gain scaffolding that was previously inaccessible, while insiders gain acceleration. The paper also explores the risks of homogenisation, the ways AI acts as a mediating translator rather than a neutral bridge, and the tension between enabling legibility and constraining originality.
Finally, the paper considers how restrictive policies toward AI-assisted writing risk disproportionately disadvantaging those who rely on AI to reach the starting line. The aim is not to resolve these issues but to prompt discussion about how AI, outsider participation and publication policy interact at the boundary of examinability.
1. Introduction
Current debates about generative AI and scholarship often focus on whether large language models will undermine expertise, erode trust or flood journals with synthetic manuscripts. This paper addresses a more specific question:
What changes when outsider theorists gain access to tools that help them turn an informal idea into something that looks like a theory paper?
By “outsider theorist” I mean a motivated non-academic who lacks recognised training or institutional affiliation in a field, but is capable of sustained conceptual work. This group includes autodidacts, interdisciplinary researchers entering new fields, industry practitioners developing conceptual frameworks relevant to their domain, and citizen scientists attempting theoretical contributions. Their backgrounds differ, but they share a common barrier: difficulty translating a private conceptual structure into a manuscript that is legible within disciplinary norms.
This threshold relates to broader concerns in science and technology studies about exclusion from knowledge production. Work on expert–lay boundaries (Collins and Evans), standpoint epistemology and epistemic injustice (Fricker; Harding) and cognitive justice (Visvanathan) shows how structural conditions influence whose ideas enter circulation. Generative AI adds a new sociotechnical mediator that may alter access to the earliest stages of theory formation.
Large language models can now assist with importing terminology, proposing simple formal or empirical structures, suggesting argument templates, identifying missing components and surfacing inconsistencies during iterative revision. These scaffolds do not turn outsiders into experts. They may, however, lower the threshold between having an idea and producing a manuscript that is examinable.
This paper develops three claims:
- Generative AI can help outsider theorists cross a participation threshold that previously required insider training or mentoring.
- This effect is strongest at the edges of idea space, where outsider freedom interacts with AI’s ability to translate conceptual work into legible form.
- Restrictive policies on AI use risk disproportionately harming those who depend on AI for basic scaffolding, while leaving high-status, invisible human assistance unchallenged.
Lowering the threshold also increases variance. More promising outsider manuscripts may appear, but so may more low-quality, high-effort submissions. The argument here concerns access to examinability, not the average quality of submissions.
2. A Stylised Comparison: Academic and Outsider with the Same Idea
To illustrate how generative AI shifts the participation threshold, consider a stylised scenario where an academic theorist and an outsider theorist independently develop the same conceptual idea. Both aim to turn it into a theory paper. Both have access to a capable large language model. The question is how their starting positions, background constraints and interactions with AI influence their ability to produce a manuscript that specialists can examine.
2.1 Without AI
An academic approaching this task benefits from several forms of tacit infrastructure. They know the established debates into which an idea must be situated, can identify which terms carry specific methodological or philosophical commitments and have internalised the genre conventions of theory writing. Even before drafting begins, the academic already understands what counts as a contribution, how arguments must be scaffolded and where objections are likely to arise. Feedback channels reinforce this: colleagues, supervisors or peer networks can signal weaknesses early in the process.
The outsider begins with the same idea but without this scaffolding. They may lack the vocabulary needed to express it in a recognisable way, may not know which concepts are foundational and which are peripheral, and may be unaware of the expectations governing structure, evidence or argumentation. Their early drafts often reflect personal logic rather than disciplinary form, making it difficult for specialists to assess the idea on its merits. Many outsider manuscripts fail not because the idea is incoherent but because the presentation is illegible to insiders.
2.2 With AI
With access to a large language model, both theorists can request definitions, structural outlines, conceptual framings and suggestions for argumentative development. They can ask the model to propose different ways of expressing the idea, identify gaps and offer alternative approaches. However, the nature of the benefit differs sharply between the two authors.
For the academic, AI mostly accelerates familiar processes. It reduces time spent drafting conventional sections, speeds up the articulation of known structures and helps surface objections they could probably have anticipated on their own. The model extends their existing competence.
For the outsider, AI does not simply accelerate the process; it changes the nature of the task. The model supplies terminology the outsider has never encountered, outlines that make the idea legible to a field they do not fully understand, and objections they would not have anticipated. In effect, AI imports discipline-shaped scaffolding that outsiders previously lacked. The outsider’s conceptual core remains their own, but the framing becomes recognisable to specialists.
2.3 Iterative Co-Writing
Both authors are likely to use AI iteratively: drafting, critiquing the output, rejecting irrelevant suggestions and refining their own framing. This iterative loop matters. It distinguishes accountable co-writing from uncritical generation. The author retains intellectual control; the model proposes structures and language that can be accepted, rejected or reshaped.
For the academic, iterative co-writing allows faster convergence toward a polished manuscript. For the outsider, it becomes a form of surrogate mentorship. The model’s suggestions effectively simulate the feedback an insider might receive from colleagues. The outsider learns the expected shape of arguments and can refine the expression of their idea until it reaches basic coherence.
2.4 The Participation Threshold
The participation threshold marks the point at which an idea becomes legible enough to be examinable. A draft crosses this threshold when its structure, vocabulary and claims are clear enough that an informed reader can evaluate it, critique it and situate it within a field’s conceptual landscape.
Before AI, outsiders often struggled to reach this threshold because their drafts failed to adopt the conventions required for evaluation. Their ideas remained private or unpublishable, not because they lacked merit but because they lacked legible form.
With AI, some outsiders can now cross this threshold. Their ideas become expressible in a format that specialists can read, interrogate and challenge. This does not equalise expertise; it equalises access to the point where expertise can begin its work.
The stylised comparison therefore highlights the paper’s central argument: AI does not flatten the intellectual landscape, but it may widen the entrance gate.
3. Outsider Exploration, Edges of Idea Space and AI as a Mediating Translator
3.1 Academic Constraints
Academic theorists develop ideas within well-defined intellectual ecosystems. Disciplinary templates, methodological defaults, citation expectations and reputational incentives shape how theories are framed and evaluated. These norms promote rigour and coherence, but they also channel attention toward familiar problem structures and acceptable argumentative forms. The result is a constrained search space: much is possible, but not everything is permissible without professional cost. This context is essential for understanding why outsiders and insiders may use generative AI differently.
3.2 Outsider Freedom
Outsider theorists are not bound to disciplinary expectations in the same way. They may develop ideas that cross fields, mix incompatible vocabularies or pursue lines of thought that an academic would avoid because they appear unorthodox or insufficiently grounded. This freedom can produce conceptual noise, but it also allows exploration in parts of idea space that institutional actors rarely enter. Outsiders occupy a different cognitive landscape, and it is at these edges that AI scaffolding may be most consequential.
3.3 AI as Mediating Translator
Generative AI does not simply help outsiders write more clearly; it reframes ideas through the conventions embedded in its training data. When an outsider requests a structure, definition or argumentative path, the model generates options that reflect dominant academic patterns. This translation helps turn unconventional concepts into something specialists can evaluate, but it also shapes expression by smoothing idiosyncrasies, importing formal vocabularies and aligning the text with expectations of scholarly discourse. AI thus functions not as a neutral bridge but as a mediating translator. In this respect, large language models resemble the “boundary objects” and sociotechnical mediators described in STS: tools that sit between communities and shape how ideas cross conceptual borders.
3.4 Homogenisation Risk
Because large language models reproduce mainstream patterns, they may unintentionally suppress some of the diversity that outsider exploration introduces. Unconventional views may be normalised into familiar structures, or distinctive conceptual moves may be softened to align with standard analytic grooves. This creates a tension: the same scaffolding that enables outsiders to reach examinability may constrain originality. At the same time, skilled users can deliberately prompt the model to offer divergent framings or conflicting interpretations, suggesting homogenisation is a tendency rather than a fixed outcome. In practice, critical co-writing is often the main defence against conceptual smoothing, and the degree of homogenisation may depend as much on user skill as on the model itself.
4. AI Assistance, Distributed Labour and Review Design
4.1 Who Depends on AI?
Generative AI assists different groups in different ways. For academically trained theorists, it is primarily an accelerator. For outsider theorists, authors writing in English as a second language (ESL) and those without institutional support, it may be essential scaffolding. Restrictive interpretations of AI use risk exacerbating this asymmetry: if visible machine assistance is penalised while invisible human assistance remains accepted, those with the least structural support are disadvantaged.
4.2 Distributed Labour and Legitimacy
Scholarly writing is already a distributed activity. Authors routinely rely on colleagues for feedback, research assistants for literature sorting, writing centres for editing support and professional editors for refinement. Authorship rests on judgement and responsibility, not on producing every sentence manually.
AI-assisted writing fits within this existing ecosystem. Through iterative co-writing, AI provides immediate structural feedback and alternate framings, but the author determines the argument and accepts accountability. Treating AI-supported outsiders as inherently suspect therefore creates a new inequity: insiders benefit from high-status human assistance, while outsiders are penalised for using an accessible tool that performs a similar structural role.
4.3 Process-Focused Heuristics for Review
Peer review can maintain academic integrity without policing tool usage. Journals could request concise statements explaining how AI contributed to a manuscript. Reviewers could assess whether authors demonstrate conceptual understanding, coherence and accountability. Manuscripts that rely on uncritical, generic AI output will typically reveal this through shallow reasoning or mismatched citations. Those developed through critical co-writing will show coherence, deliberate structuring and clear intellectual ownership.
This process-focused approach protects scholarly standards while preserving fairness in access.
5. Limitations and Open Questions
This argument is speculative. Uncertainties remain: how often outsider–AI collaborations reach examinability; whether review cultures treat AI disclosures differently when they come from unaffiliated authors; whether AI scaffolding narrows conceptual diversity; and whether lowering the threshold increases the volume of high-variance submissions.
A central question is whether LLMs encode dominant academic perspectives in ways that counteract levelling. If models disproportionately reflect prevailing traditions, their scaffolding may marginalise alternative viewpoints even as they widen access to participation.
Future investigation could explore whether outsider–AI manuscripts are increasing on preprint servers, how policy changes affect unaffiliated authors and ESL writers, and whether review systems can balance openness with noise management without reverting to status heuristics.
6. Conclusion
Generative AI may function as a leveller at the point where ideas become examinable manuscripts. It does not equalise expertise or institutional power, but it may allow more outsiders to cross a threshold that previously excluded them.
If review systems conflate iterative co-writing with undirected generation and treat both as inherently suspect, the threshold narrows again, especially for those who rely on AI as their only structural support. If peer review focuses instead on ideas and reasoning, AI may broaden participation without undermining rigour.
Whether outsider–AI collaborations will produce valuable theoretical insight remains uncertain. What is clear is that policy choices, not technical capabilities, will determine who reaches the starting line. This is both an epistemic question and a matter of justice. Progress now depends on empirical work: without data on authorship patterns, reviewer behaviour and conceptual diversity, it will remain difficult to judge whether AI is widening participation or merely changing its surface form.
7. Implications for Future Research
This paper focuses narrowly on the participation threshold, but several broader lines of inquiry follow from the argument. Each reflects an open question about how generative AI reshapes access to theoretical work and how scholarly communities might respond.
A first line concerns empirical patterns of outsider participation. At present, there is no systematic evidence about whether AI assistance leads to an increase in examinable manuscripts from unaffiliated authors. Preprint metadata does not reliably distinguish outsiders from insiders, and journals rarely track author background in ways that would illuminate this dynamic. Even coarse indicators of change would help determine whether the levelling effect proposed here is occurring in practice or remains conceptual.
A second line relates to the norms and behaviours of peer reviewers. If AI disclosures are treated differently depending on whether they come from institutional or non-institutional authors, then a structural asymmetry may re-emerge. Studying reviewer responses to anonymised examples of AI-assisted writing could clarify whether tool use influences assessments of credibility, rigour or legitimacy.
A third avenue concerns the epistemic effects of AI scaffolding. If large language models consistently normalise particular styles, framings or argumentative structures, this may influence which kinds of outsider ideas are seen as viable. Understanding whether AI-supported writing tends to reinforce mainstream academic patterns, reduce conceptual diversity or disproportionately shape the expression of marginal ideas would help clarify when AI functions as a leveller and when it acts as a filter.
A final line of inquiry lies in policy design. Journals and preprint platforms are still experimenting with AI guidelines, often under pressure to deter fraudulent submissions. Developing policies that preserve integrity while supporting fair participation requires empirical grounding, stakeholder consultation and clarity about what authorship entails. Practical heuristics for evaluating AI-assisted manuscripts could be tested, refined and validated through pilot programmes.
Taken together, these strands suggest that the central issues raised here are not purely technical but institutional and epistemic. Understanding how participation thresholds shift will require attention to author experience, reviewer judgement, disciplinary norms and the broader ecology of scholarly communication. Developing these lines of inquiry would not only clarify AI’s effects on participation but also help determine whether generative models are emerging as new boundary mechanisms in knowledge-making.
AI Assistance and Authorial Responsibility
This paper was produced through an iterative co-writing process involving large language models. AI tools were used to propose structural outlines, suggest alternative formulations, surface objections and assist with revision. All intellectual decisions, arguments and interpretations were developed and finalised by the author. The AI systems used were tools, not contributors.