Transparency

The algorithm is not a secret.

Most platforms hide how their systems work. We do the opposite. This page documents how the feed ranks content, what data we collect, how moderation works, and how product decisions get made. If we ever change any of these, we will publish the change here before it ships.

How the feed ranks your work.

Every publication that appears in your feed is scored by a formula you can see:

score = (0.4 × relevance) + (0.3 × recency) + (0.2 × engagement_quality) + (0.1 × connection_proximity)

Relevance (40%) — how closely a publication matches your stated professional interests, industries, and thesis.

Recency (30%) — a gentle decay so newer work surfaces first, tuned to avoid the rage-refresh loop that other platforms exploit.

Engagement quality (20%) — thoughtful comments weighted far above likes. Length, reply depth, and member reputation all count.

Connection proximity (10%) — work from members you are connected to or who are connected to your connections surfaces slightly higher.

What is explicitly not in the formula: outrage, inflammatory content, time-on-platform optimization, paid boosting.
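
For readers who prefer code, here is the same formula as a short TypeScript sketch. The weights are the documented ones above; the input names, the exponential decay with a 48-hour half-life, and the example values are illustrative assumptions, not the production implementation.

  // Sketch of the published feed-scoring formula. The weights are the ones
  // documented above; the signal names and the recency decay are assumptions.

  interface FeedSignals {
    relevance: number;            // 0..1 match against stated interests, industries, thesis
    hoursSincePublication: number;
    engagementQuality: number;    // 0..1 comment-weighted engagement, not raw likes
    connectionProximity: number;  // 0..1 first- and second-degree connection signal
  }

  // Hypothetical "gentle" decay: a 48-hour half-life, chosen only for illustration.
  function recency(hoursSincePublication: number, halfLifeHours = 48): number {
    return Math.pow(0.5, hoursSincePublication / halfLifeHours);
  }

  function feedScore(s: FeedSignals): number {
    return (
      0.4 * s.relevance +
      0.3 * recency(s.hoursSincePublication) +
      0.2 * s.engagementQuality +
      0.1 * s.connectionProximity
    );
  }

  // Example: a highly relevant publication from yesterday with solid discussion.
  const score = feedScore({
    relevance: 0.9,
    hoursSincePublication: 24,
    engagementQuality: 0.6,
    connectionProximity: 0.3,
  });
  console.log(score.toFixed(2)); // 0.4*0.9 + 0.3*0.71 + 0.2*0.6 + 0.1*0.3 ≈ 0.72

Nothing in the sketch reads engagement volume, session length, or ad spend; a publication's score depends only on the four published signals.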

What we collect — and do not.

We collect:

  • The profile information you provide
  • Your publications, comments, and connections
  • Basic operational metadata (login times, device type) for security
  • What you search for and filter by, for directory and feed personalization only

We do not collect:

  • Off-platform browsing via tracking pixels or third-party cookies
  • Location beyond country-level (unless you explicitly add a city)
  • Contacts, calendar, or other device-level data
  • Behavioral patterns for advertising targeting

How moderation works.

All content is screened by an automated moderation layer (OpenAI's moderation API) before publication. Content that triggers high-confidence harm signals is blocked. Content that triggers medium-confidence signals is queued for human review. Everything else publishes immediately.
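
In code, that triage looks roughly like the TypeScript sketch below. The thresholds and the shape of the moderation result are illustrative assumptions, not the exact values our moderation layer uses.

  // Sketch of the three-way moderation triage described above. The threshold
  // values and result shape are assumptions for illustration only.

  type ModerationDecision = "block" | "human_review" | "publish";

  interface ModerationResult {
    flagged: boolean;
    categoryScores: Record<string, number>; // per-category confidence, 0..1
  }

  // Hypothetical thresholds: high-confidence harm blocks, medium queues for review.
  const HIGH_CONFIDENCE = 0.9;
  const MEDIUM_CONFIDENCE = 0.4;

  function triage(result: ModerationResult): ModerationDecision {
    const maxScore = Math.max(0, ...Object.values(result.categoryScores));
    if (maxScore >= HIGH_CONFIDENCE) return "block";
    if (maxScore >= MEDIUM_CONFIDENCE || result.flagged) return "human_review";
    return "publish";
  }

  // Example: a publication with no strong harm signals publishes immediately.
  console.log(triage({ flagged: false, categoryScores: { harassment: 0.05, violence: 0.01 } }));
  // -> "publish"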

Human review is handled by platform admins in the Crawl phase, and will expand to a Community Guardian model — trusted members in the top Sifa reputation tier — as the platform grows. Every moderation decision is logged, appealable, and auditable by the Community Advisory Council.

How product decisions get made.

Roadmap priorities are proposed by the founding team, reviewed quarterly by the Community Advisory Council, and published here before they ship. Major changes (new features, tier pricing, data handling) go through a 30-day comment window with the council before being finalized.

We will never A/B test dark patterns. We will never ship a feature designed to increase platform dependence at the cost of member wellbeing. If you think a feature we have shipped violates that standard, email governance@newbws.com.