AI Policy Foundation

AI behavior boundaries in plain English.

This public AI policy is written as AgentAlly's behavioral truth layer: what the AI helps with, what it does not do, when humans must review and approve, what uses are prohibited, and why current assistive behavior should never be mistaken for default autonomous operation.

Draft for Launch Review · Assistive Mode Only

Launch foundation draft for counsel, product, and trust-review alignment. This is the plain-English behavioral boundary layer for AgentAlly's current assistive AI posture. If direct-recipient or autonomous modes are ever introduced, they must be enabled and documented separately rather than inferred from this page.

Last Updated
April 18, 2026
Effective Date
Pending final launch publication

Defined Terms

These terms keep the AI boundary language consistent with the legal and product surfaces around it.

AI Output

Any draft, summary, recommendation, classification, generated content, or other machine-generated output produced by AgentAlly.

External Communication

Any email, SMS, call script, direct message, document transmittal, marketing asset, or other content intended to be sent, shared, or delivered outside the Service.

Approved Communication

A specific External Communication that an authorized user reviewed and affirmatively approved for transmission or release.

Assistive Mode

The current launch posture in which AgentAlly helps prepare, summarize, organize, and recommend work while humans retain approval and final-decision authority.

Direct-Recipient Mode

Any future mode in which AI could communicate directly with an external recipient, auto-reply, or send without the current per-item approval flow. This is not the default posture described by this policy.

Section 1

Purpose and Scope

This policy explains, in plain English, how AgentAlly's AI features are meant to behave at launch. It is the user-readable behavioral policy layer that sits alongside the Terms of Service and Privacy Policy, rather than replacing them.

The goal is to make the biggest trust questions obvious: what the AI helps with, what it does not do, when a human must step in, what uses are prohibited, what should not be relied on, what happens when the system is wrong, and how future direct-recipient or autonomous modes would be treated separately.

This policy is intentionally product-specific. It is not a generic model-provider policy, not a full statutory digest, and not the place where AgentAlly tries to hide launch-critical boundaries in fine print.

Section 2

What AgentAlly AI Does

AgentAlly may generate drafts, summarize notes and communications, organize work and context, recommend next steps, surface priorities, and help stage outbound actions for review. The product is designed to help licensed professionals move faster on preparation and coordination while staying inside visible trust boundaries.

That can include turning voice notes into structured follow-ups, preparing review-ready email or SMS drafts, summarizing transaction context, organizing contact or deal history, and recommending a next move or workflow sequence based on workspace context.

In other words, AgentAlly is built to assist thinking and preparation. It is not presented as an invisible substitute for human judgment or approval.

Section 3

Human Review and Approval Rule

Users remain responsible for reviewing AgentAlly outputs before relying on them, using them, or approving them. Human review is not ceremonial. It means the user has a meaningful chance to inspect the actual content, change it, reject it, or replace it before a sensitive action occurs.

External Communications require human review and approval before sending unless a separately documented feature clearly provides otherwise. A draft becomes an Approved Communication only when a user intentionally approves that specific send or release.

Connected accounts, saved preferences, templates, or prior approvals do not create blanket permission for silent future sends in current Assistive Mode. If content changes, approval needs to track the reviewed version rather than an older or different draft.

  • Users must have the opportunity to edit, reject, or replace proposed content before send.
  • Approval is an intentional user action, not a hidden default or one-time setup that silently authorizes future sends.
  • If a future Direct-Recipient Mode is ever offered, it must be separately enabled, separately governed, and clearly distinguished from current Assistive Mode.

Section 4

What AgentAlly AI Does Not Do

AgentAlly does not guarantee accuracy, completeness, timeliness, originality, compliance, or suitability for a specific client, transaction, or jurisdiction. It does not become a broker of record, attorney, lender, title company, escrow provider, or other licensed professional just because it generated a draft or recommendation.

AgentAlly does not silently become a default autonomous sender because an integration exists or a workflow has been configured. It may prepare or stage outbound actions, but current launch behavior should not be read as authority to act without review.

AgentAlly also does not make final consequential decisions on the user's behalf. The product can help frame choices, but it should not be the final deciding authority for regulated communications, housing decisions, contract positions, disclosures, or other sensitive judgment calls.

Section 5

Output Limitations and Non-Reliance

AI Output may be inaccurate, incomplete, outdated, inconsistent, biased, generic, or inappropriate for the task at hand. The same prompt may produce different results, and outputs may reflect missing context, ambiguous instructions, flawed source data, or stale information.

Users must not rely on AgentAlly as the sole or primary basis for consequential decisions in sensitive domains. That includes decisions affecting legal obligations, housing access, contract positions, regulated communications, disclosures, or other matters where qualified human review and lawful authority are required.

When AgentAlly is wrong, the intended workflow is to slow down, not push through. Edit the output, reject it, replace it, or escalate it to the right human reviewer. Do not treat the existence of an AI draft as evidence that the draft is safe to use.

  • Review facts, names, dates, property details, consent status, tone, and recipient fit before use.
  • Check for stale assumptions, omitted context, or language that may be too generic, too strong, or too risky for the specific transaction.
  • Use qualified human review when the issue crosses into legal, brokerage, lender, title, escrow, fair-housing, or other licensed-professional judgment.

Section 6

Product and Professional Boundaries

AgentAlly is not legal advice, not definitive contract or disclosure interpretation, and not a substitute for required attorney, broker, lender, title, escrow, tax, insurance, appraisal, or other licensed-professional review.

The product may help prepare materials that later receive professional review, but it should not be treated as the final authority on what a contract means, whether a disclosure is sufficient, whether a communication satisfies law, or whether a transaction decision is appropriate.

Users remain responsible for deciding when human supervision, broker sign-off, or professional review is required. This includes situations involving fair housing, state licensing rules, legal rights, trust money, disclosures, contract execution, or other materially sensitive matters.

  • Do not use the Service as a substitute for broker supervision where broker review is required.
  • Do not rely on generated text as final contract, deed, or other execution-ready legal documentation without appropriate review and authority.
  • Do not treat compliance-related suggestions as a promise that the underlying workflow is legally cleared.

Section 7

Prohibited Uses and Acceptable Use Rules

The following uses are prohibited because they conflict with AgentAlly's launch posture, legal boundaries, and real-estate-sensitive risk surface. These examples are intentionally specific to the product and are not the only prohibited uses.

Discriminatory housing conduct

  • Using AgentAlly to create discriminatory housing ads, targeting, steering, segmentation, neighborhood recommendations, or lead-ranking logic.
  • Requesting or approving content that excludes, discourages, or shows a preference for or against people based on protected characteristics or close proxies.
  • Using outputs to decide who should see a property, receive outreach, or be routed differently in ways that violate fair-housing or anti-discrimination rules.

Unlawful or deceptive outreach

  • Generating or sending spam, unlawful telemarketing, robotexts, fake reviews, forged endorsements, or consent-bypassing marketing.
  • Impersonating a person or organization, or presenting content as personally authored, reviewed, or approved when that is false.
  • Hiding AI involvement where disclosure is legally required or where omission would be materially misleading.

Approval, safety, or audit bypass

  • Trying to bypass approval gates, reuse stale approval on changed content, tamper with audit controls, or defeat policy restrictions.
  • Prompting the system to behave as if a send was approved when it was not.
  • Attempting to disable or work around product controls that are meant to preserve human review, integrity checks, or accountability.

Improper reliance in sensitive domains

  • Using outputs as the sole or primary basis for consequential legal, housing, compliance, or contract decisions without qualified human review and authority.
  • Using the Service to prepare final-execution legal or contract materials without appropriate human review, supervision, or legal authority.
  • Treating AI-generated content as definitive contract interpretation, disclosure sufficiency, brokerage supervision, or licensed-professional judgment.

Fraud, abuse, and other unlawful conduct

  • Using the Service for harassment, defamation, fraud, privacy invasion, unlawful surveillance, or other abusive conduct.
  • Uploading or processing data without rights or authority to do so.
  • Using AgentAlly in violation of law, third-party rights, or platform rules, even if the prompt looks operational or routine.

Section 8

Fair-Housing and Anti-Discrimination Boundaries

AgentAlly must not be used for discriminatory housing advertising, steering, targeting, segmentation, recommendation, or communication practices. Protected-class discrimination and proxy discrimination are out of bounds even if they are framed as marketing optimization or workflow efficiency.

Users must review listing copy, neighborhood summaries, recommendation queues, follow-up suggestions, and staged communications for language or logic that could exclude, channel, discourage, or favor people based on protected characteristics or close proxies.

Examples of risk areas include neighborhood descriptions, school commentary, "safe" or "family-friendly" positioning, lead prioritization, audience segmentation, or any attempt to infer who belongs in or should be kept away from a particular property, area, or outreach stream.

  • Do not ask AgentAlly to rank or route people based on race, color, religion, sex, disability, familial status, national origin, or other protected characteristics.
  • Do not use zip code, language, household makeup, school-preference phrasing, or neighborhood stereotypes as stand-ins for protected-class targeting or steering.
  • Do not use generated marketing or recommendation logic in a way that would narrow housing opportunity, visibility, or outreach unlawfully.

Section 9

Communications Law, Consent, and Deceptive Identity Boundaries

AgentAlly is an approval-gated drafting and workflow product, not a universal compliance engine. Users remain responsible for sender identification, consent, unsubscribe or revocation handling, quiet-hours restrictions, recording rules, platform policies, and any other obligations that apply to their outreach.

The Service must not be used for fake reviews, spam, unlawful telemarketing, consent-bypassing outreach, impersonation, or misleading claims about who wrote, approved, or sent a communication.

If a law, brokerage rule, MLS rule, platform rule, or transaction context requires AI involvement to be disclosed, users remain responsible for making that disclosure. AgentAlly's own AI disclosure labels or document footers can support transparency, but they do not automatically resolve every downstream disclosure obligation.

  • Do not hide AI use where disclosure is legally required or where hiding it would be materially misleading.
  • Do not use the Service to spoof identities, fake human approval, or make it appear that a person personally authored or reviewed content when they did not.
  • Do not treat AgentAlly as a substitute for consent tracking, opt-out compliance, or channel-specific marketing rules.

Section 10

What These Rules Mean By Function

The table below shows how AgentAlly's core AI functions should be interpreted in current Assistive Mode. The pattern is consistent: the AI can help prepare and organize work, but the human keeps responsibility for approval, final judgment, and sensitive decisions.

Drafting

Assistive Mode
Helps With
  • Creates first-pass emails, SMS messages, summaries, narratives, and other review-ready content.
  • Can adapt voice notes, workflow context, or prior history into a usable draft faster than writing from scratch.
Does Not Do
  • Does not guarantee that the wording is accurate, compliant, or appropriate for the final recipient.
  • Does not remove the need to review facts, tone, property details, consent status, disclosures, or fair-housing-sensitive language.
Human Check
A human still decides whether the draft is acceptable, edits it if needed, and owns the final content that is used.

Summarizing

Assistive Mode
Helps With
  • Condenses notes, threads, files, and workspace history into faster-read summaries.
  • Surfaces key points, deadlines, and context that might otherwise be scattered.
Does Not Do
  • Does not promise that every relevant fact or nuance was captured.
  • Does not replace reading the underlying source when precision matters or the stakes are high.
Human Check
A human should confirm that the summary did not omit, distort, or overstate important details before acting on it.

Recommending

Assistive Mode
Helps With
  • Suggests next steps, priorities, queue items, and possible workflow moves based on available context.
  • Can help users think through options more quickly and notice follow-up work that might otherwise be missed.
Does Not Do
  • Does not become the final decision-maker for regulated, legal, housing, or other consequential choices.
  • Does not authorize discriminatory targeting, unlawful outreach, or unsafe automation just because a recommendation exists.
Human Check
A human still decides whether the recommendation is appropriate, lawful, and worth following in the specific situation.

Staging actions

Assistive Mode
Helps With
  • Turns drafts or recommendations into prepared actions that can be reviewed, edited, or queued.
  • Makes the next move legible before execution rather than hiding it behind automation.
Does Not Do
  • Does not convert a staged action into permission to execute silently.
  • Does not make a queued draft inherently safe or current if context changes after staging.
Human Check
A human still needs to confirm the recipient, timing, content, and whether the action should happen at all.

Sending and external communications

Assistive Mode
Helps With
  • Supports approval-gated sends after the user reviews and approves the specific communication.
  • Can preserve approval and audit signals around outbound workflow steps.
Does Not Do
  • Does not default to silent auto-send, autonomous direct-recipient behavior, or one-time blanket approval in current Assistive Mode.
  • Does not take over consent, sender-identification, opt-out, quiet-hours, or other communications-law duties.
Human Check
A human still approves the specific communication and remains responsible for the final send decision and downstream compliance obligations.

Section 11

Correction, Override, and Issue Reporting

Correction and override are intended parts of the workflow; a human changing a draft is not a sign that the product failed. If AgentAlly produces content that is wrong, incomplete, stale, biased, unsafe, or simply not a fit for the situation, users should edit it, reject it, replace it, or decline to send it.

Users should report harmful, discriminatory, misleading, or otherwise problematic outputs through the available support path so the issue can be reviewed. Until a dedicated trust or policy inbox is published, launch-review questions or reports can be sent to the contact listed on this page.

If an issue may have legal, compliance, or client-impact consequences, the user should also follow their own correction, disclosure, escalation, and recordkeeping obligations. AgentAlly does not take over those obligations just because the first draft involved AI.

  • Treat "edit, reject, or replace" as a normal safety control rather than an exception path.
  • Report harmful, deceptive, discriminatory, or clearly unsafe outputs instead of working around them silently.
  • Use human escalation when the issue touches legal advice, contract language, disclosures, fair housing, or regulated outreach.

Section 12

Logging, Auditability, and Accountability

AgentAlly may keep records of generations, prepared actions, edits, approvals, rejections, send attempts, send results, audit events, and related metadata for support, security, compliance, dispute resolution, abuse prevention, and product integrity purposes.

These records are meant to preserve accountability and help explain what happened, especially around approval-gated workflows. They do not mean that AgentAlly staff manually reviews every output before it is used. The primary control remains the user's own review and approval.

Some product surfaces also include AI disclosure labeling or audit fields tied to generated content. Those controls help make AI involvement and approval history visible, but users still need to decide whether the final output is appropriate to use.

Section 13

Future Direct-Recipient or Autonomous Modes

Recheck After Lane C

This policy covers current Assistive Mode only. It should not be stretched to silently authorize future direct-recipient, auto-reply, auto-send, or other autonomous behavior.

If AgentAlly ever introduces Direct-Recipient Mode or a materially more autonomous workflow, that mode must be separately documented, separately enabled, and accompanied by clearer controls, disclosures, boundaries, and human handoff options appropriate to the risk.

Until then, the existence of AI drafting, recommendations, staged actions, or current approval-gated sends should not be interpreted as permission for silent autonomous operation.

Review Note

This section should be re-reviewed after Lane C approval-boundary evals and security review so the public language continues to match the enforcement layer.

Section 14

Policy Updates and Contact

AgentAlly may update this policy as the product, provider stack, trust controls, or legal posture changes. Material changes should be reflected through the website, product, email, or another reasonable notice method before the updated policy becomes operative, unless changes must take effect earlier for security, abuse prevention, or legal reasons.

Questions, launch-review feedback, or issue reports about this draft can be sent to ben@getagentally.com. A more specific trust, compliance, or policy contact path may be added later without changing the core behavioral boundaries described here.

Launch Review Questions

Explicit seams for product and counsel review

  • Confirm the final public reporting path for harmful-output, trust, and policy issues before launch, including whether a dedicated trust or compliance inbox will exist.
  • Confirm how public help-center and marketing surfaces should describe AI disclosure expectations for external recipients without overclaiming unsettled legal rules.
  • Confirm which launch channels, beyond documents, automatically include an AI disclosure marker, and where adding disclosure remains the user's own responsibility.
  • Re-review the future-mode section after Lane C approval-boundary evals and security review so the public wording continues to match enforcement reality.
  • Keep any future direct-recipient, auto-reply, or autonomous-send feature out of scope for this policy unless and until it receives separate product, legal, and trust documentation.
Validation

Product-truth checks for the draft

  • The policy says AgentAlly helps draft, summarize, organize, recommend, and stage work without implying default autonomous authority.
  • The human review and approval rule is explicit, operationally meaningful, and tied to specific external communications rather than vague oversight language.
  • Outputs are described as potentially wrong, incomplete, stale, biased, generic, or inappropriate, and the document does not encourage sole reliance in sensitive domains.
  • Professional-boundary language clearly says the Service is not legal advice, not definitive contract interpretation, and not a substitute for required licensed-professional review.
  • Prohibited uses explicitly cover discriminatory housing conduct, deceptive identity use, unlawful outreach, fake reviews, consent bypass, and attempts to evade approval or audit controls.
  • Logging and accountability language is present without implying that AgentAlly staff manually reviews every output before use.
  • Current Assistive Mode is clearly separated from any future direct-recipient or autonomous mode.

Questions, reports, or launch-review feedback

Until the final launch version is published, questions about this AI policy foundation draft can be sent to ben@getagentally.com.

AI-assisted draft prepared for launch counsel and product review on April 18, 2026. This page reflects AgentAlly's current assistive AI posture and should be rechecked after Lane C trust evals and security review before public launch.