Richard Schnitzler (rschnitzler@shield-legal.com)
2025-05-20 14:19:08

Here’s a detailed breakdown of the use cases for each scenario—LawRuler, Litify, and custom Salesforce build—focusing on how each platform handles typical legal practice needs: intake, case/matter management, automation, communications, reporting, and integrations.

---

  1. LawRuler (Turnkey Legal CRM + Case Management)

Use Case A: Solo or Small Law Firms (1–10 users)

Client Intake: Web-based forms with branching logic. Minimal setup to route leads to intake team or attorneys.

Automation: Built-in email/SMS campaigns for intake follow-up. You don’t need a developer.

Case Management: Templates for personal injury, workers’ comp, etc. It’s plug-and-play.

Communications: Native SMS, call tracking, and VOIP integration out-of-box. Phone records tied to client files.

Document Handling: Upload & eSign (DocuSign-style). Document metadata is not deeply structured.

Reporting: Dashboards focused on marketing ROI, signed cases, settled cases.

Best Use Case: A firm that wants to scale intake and simple case management quickly, without technical overhead.

Limitations:

Not designed for complex multi-party litigation or nuanced workflows.

Limited custom object modeling.

Reporting is functional but not as granular as Salesforce's.

---

  2. Litify on Salesforce (Structured Legal Platform on Force.com)

Use Case B: Medium to Large Firms (10–1000+ users)

Client Intake: Uses Litify's Intake__c object with automation to evaluate criteria and convert qualified intakes to Matter__c. Can be customized per practice area.

Case/Matter Management: Hierarchical case stages, role-based layouts (paralegal, attorney, etc.), and court-specific fields. Personal injury, medmal, class action templates included.

Automation:

Flows or Apex: Create tasks based on intake type.

Milestone tracking: Show SLA warnings for discovery/filing deadlines.

Document generation & approval for standard filings.
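
The task-creation automation described above can be done declaratively in Flow or with a small trigger. A minimal Apex sketch, assuming a generic `Intake__c` object with activities enabled — the object and field names are illustrative assumptions, not Litify's packaged API names or its automation framework:

```
// Sketch only: create a follow-up Task when an intake is inserted.
// Intake__c here is an illustrative stand-in for the packaged intake object.
trigger IntakeFollowUpTask on Intake__c (after insert) {
    List<Task> tasks = new List<Task>();
    for (Intake__c i : Trigger.new) {
        tasks.add(new Task(
            WhatId = i.Id,                // link the task to the intake
            OwnerId = i.OwnerId,          // assign to the intake owner
            Subject = 'Review new intake: ' + i.Name,
            ActivityDate = Date.today().addDays(1)));
    }
    insert tasks; // bulk-safe: one DML for the whole trigger batch
}
```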

Communications: Can use Salesforce’s integrations with Outlook, Gmail, SMS (via Twilio or other managed packages), and even voice services.

Document Management:

Native Salesforce Files + optional integration with NetDocuments/iManage.

Version control and access permissions.

Reporting: Full Salesforce reporting suite + Litify-specific dashboards. Metrics per Matter, User, Timekeeper, Referral Source, etc.

Best Use Case: Enterprise-scale firm managing thousands of cases, multiple attorneys, referrals, and complex workflows.

Limitations:

Higher cost (Salesforce licensing + Litify fees).

Requires skilled Salesforce Admin or Developer team to optimize.

---

  3. Custom Salesforce Build (DIY Legal CRM/Case System)

Use Case C: Tech-Savvy Legal Teams or Complex/Niche Practice Areas

Client Intake:

Build a custom Intake__c object.

Use screen flows to walk through client eligibility and custom qualifying logic.

Automatically generate Matter__c and related Account, Contact records if approved.
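
The intake-to-matter conversion step above can be exposed to a screen flow as an invocable method. A hedged sketch — `Matter__c` with an `Intake__c` lookup is the hypothetical custom model described here, not a packaged schema:

```
// Sketch: flow-callable conversion of approved intakes into matters.
// Assumes a Matter__c object with an Intake__c lookup (hypothetical model).
public with sharing class IntakeConversionService {
    @InvocableMethod(label='Convert Approved Intakes to Matters')
    public static void convert(List<Id> intakeIds) {
        List<Matter__c> matters = new List<Matter__c>();
        for (Intake__c i : [SELECT Id, Name FROM Intake__c WHERE Id IN :intakeIds]) {
            matters.add(new Matter__c(
                Name = i.Name,       // carry the intake name onto the matter
                Intake__c = i.Id));  // keep the back-reference for reporting
        }
        insert matters;
    }
}
```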

Case Management:

Custom Matter__c object with record types (Civil, Criminal, Immigration, etc.).

Related lists for Events__c, CourtDates__c, Tasks, Notes__c, OpposingCounsel__c.

Custom metadata-driven fields (e.g., dynamic questions based on matter type).

Automation:

Apex Triggers/Flows for dynamic task creation.

Notification logic for approaching court dates.

API for lead intake or e-signature sync.

Communications:

Connect to Twilio, RingCentral, or custom VOIP via Salesforce APIs. Email-to-Case for tracking inbound communication tied to the Matter.

Document Management:

Generate pleadings with tools like Conga or Drawloop. Store discovery docs with versioning and permissioning.

Reporting:

Full Salesforce Reporting + Dashboards.

Track case duration, court results by county, success rate by attorney, etc.

Best Use Case: Mid-sized or growing firm with internal dev/admin resources, looking for tailored process control.

Limitations:

Steep learning curve. Build time and testing are significant. Requires governance and documentation to avoid technical debt.

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-05-20 14:21:17

| | LawRuler | Litify | Custom Salesforce |
|---|---|---|---|
| Setup Time | Days | Weeks | Months |
| Customization | Low | Medium-High | Unlimited |
| Cost | Medium | High | Medium-High |
| Legal-Specific Workflows | Yes (fixed) | Yes (configurable) | You build it |
| Intake Automation | Yes | Yes | Yes (requires Flow/Apex) |
| Document Generation | Built-in eSign | Salesforce + Partner Tools | Partner Tools / Custom Code |
| Role-Based Access | Basic | Full Salesforce RLS | Fully Customizable |
| Integrations | Limited | Extensive via Salesforce AppExchange | Full API access / AppExchange |
| Best For | Small Firms | Large Legal Ops | Mid-Sized, Complex Practices |

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-05-29 13:16:45

Yes — Litify adds a structured layer of legal-specific objects (custom Salesforce objects) on top of the Salesforce platform. These objects form the backbone of its case/matter management, intake workflows, and operational tracking.

Here’s a list of the core Litify objects, organized by function, along with brief definitions:


⚖️ Core Legal Process Objects

| Object API Name | Label | Definition / Purpose |
|---|---|---|
| Intake__c | Intake | Used to gather initial client or case info. May include qualification questions, leads, referrals. |
| Matter__c | Matter | The central case object (equivalent to a "Case File"). Tracks litigation lifecycle, status, parties, court info, etc. |
| Litify_Referral__c | Referral | Tracks inbound or outbound case referrals, referral partners, commissions, etc. |
| Litify_Matter_Type__c | Matter Type | Defines templates or configurations for different types of matters (e.g., PI, MedMal). |

---

👥 People and Roles

| Object API Name | Label | Definition / Purpose |
|---|---|---|
| Litify_Party__c | Party | Represents any person/entity involved in a Matter: client, opposing counsel, adjuster, etc. |
| Litify_Party_Role__c | Party Role | Maps a Party to a specific role in the Matter (e.g., Plaintiff, Defendant, Witness). |
| Provider__c | Provider | External parties like doctors, chiropractors, etc. Often used for scheduling and billing. |
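
In an actual org, the managed-package objects carry the `litify_pm__` namespace — the deletion scripts later in this thread query `litify_pm__Role__c` and `litify_pm__Matter__c` directly. For example, listing the parties attached to one matter (only fields that appear in those scripts are assumed to exist):

```
// Sketch: enumerate party roles on a single Matter via the namespaced
// objects used elsewhere in this thread.
Id matterId = 'a0LVT000008cBp32AE'; // example Id reused from a later script
for (litify_pm__Role__c r : [
        SELECT Id, litify_pm__Party__c
        FROM litify_pm__Role__c
        WHERE litify_pm__Matter__c = :matterId]) {
    System.debug('Party on matter: ' + r.litify_pm__Party__c);
}
```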

---

📋 Tasks, Calendars, and Workflows

| Object API Name | Label | Definition / Purpose |
|---|---|---|
| Litify_Task__c | Litify Task | Enhanced task with legal-specific fields like SLA tracking, phase/stage, related Matter. |
| Event (standard) | Event | Used for hearings, deadlines, appointments. Integrated with Salesforce calendar. |
| Checklist__c | Checklist | Used to define procedural steps in a legal process (e.g., Discovery checklist). |
| ChecklistItem__c | Checklist Item | Individual steps within a Checklist. |

---

📁 Documents and Notes

| Object API Name | Label | Definition / Purpose |
|---|---|---|
| Litify_Note__c | Note | Used for tracking observations, updates, or notes tied to a Matter or Intake. |
| Litify_Document__c | Document | Represents a structured metadata entry for uploaded files (e.g., pleadings, evidence). |

---

📊 Timekeeping & Finance (Optional Modules)

| Object API Name | Label | Definition / Purpose |
|---|---|---|
| Litify_Billing_Record__c | Billing Record | Used to track billable time entries (if Timekeeping is enabled). |
| Litify_Invoice__c | Invoice | Structured representation of billed time/services, if applicable. |

---

🔁 Automation and Tracking

| Object API Name | Label | Definition / Purpose |
|---|---|---|
| Litify_Matter_Stage__c | Matter Stage | Tracks which phase (e.g., Discovery, Trial, Settlement) a Matter is currently in. |
| Litify_Activity__c | Activity Log | Central log of all actions (calls, emails, tasks) across the platform. |
| Litify_Configuration__c | Litify Configuration | Admin-level controls for templates, SLAs, automations, etc. |

---

🔄 Integrations/Sync Objects

| Object API Name | Label | Definition / Purpose |
|---|---|---|
| Litify_Email_Log__c | Email Log | Stores metadata for synced emails (if using Litify's email tools or Einstein Activity Capture). |
| Litify_External_Referral__c | External Referral | Tracks matters referred in/out via Litify's referral network. |

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-06-04 13:12:41

https://www.youtube.com/shorts/yOvhoy0z9jY

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-06-04 14:20:27

```
// ======= CONFIGURATION =======
Id matterId = 'a0LVT000008cBp32AE'; // <- Replace with actual Matter__c Id
Boolean dryRun = true; // Set to false to actually delete

// ======= LOGIC =======
System.debug('--- DRY RUN MODE: ' + dryRun + ' ---');

List<SObject> toDelete = new List<SObject>();

try {
    // Matter Financial Records
    toDelete.addAll([SELECT Id FROM JPMassTortFinancialDetail__c WHERE Matter__c = :matterId]);

    // Matter Medical Records
    toDelete.addAll([SELECT Id FROM JP_Medical_Record_Tracker__c WHERE Matter__c = :matterId]);

    // Matter Related Parties
    List<litify_pm__Role__c> partyIdList = [SELECT Id, litify_pm__Party__c FROM litify_pm__Role__c WHERE litify_pm__Matter__c = :matterId];
    System.debug('partyIdList: ' + partyIdList);
    if (!partyIdList.isEmpty()) {
        litify_pm__Role__c partyId = partyIdList[0];
        toDelete.addAll([SELECT Id FROM Account WHERE SL_Parent_Matter_ID__c = :partyId.litify_pm__Party__c]);
    }


// Final: Matter itself
SObject matterRecord = [SELECT Id FROM litify_pm__Matter__c WHERE Id = :matterId LIMIT 1];
toDelete.add(matterRecord);

// Logging
System.debug('Found ' + toDelete.size() + ' records to delete:');
for (SObject s : toDelete) {
    System.debug(s.getSObjectType() + ' - ' + s.Id);
}

// Actual deletion (if dryRun is false)
if (!dryRun) {
    delete toDelete;
    System.debug('Deletion complete.');
} else {
    System.debug('Dry run complete. No records were deleted.');
}

} catch (Exception ex) {
    System.debug('Error during deletion: ' + ex.getMessage());
}
```

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-06-04 16:13:01

```
// ======= CONFIGURATION =======
Id intakeId = 'a1UXXXXXXX'; // Replace with your real Intake__c Id
Boolean dryRun = true;
Boolean includeMatterCascade = true; // Set to true to delete Matter + its relateds

// ======= LOGIC =======
System.debug('--- DRY RUN MODE: ' + dryRun + ' ---');

List<SObject> toDelete = new List<SObject>();

try {
    // Get Intake and related references
    Intake__c intake = [
        SELECT Id, Name, Matter__c, Lead__c, Primary_Contact__c
        FROM Intake__c
        WHERE Id = :intakeId
        LIMIT 1
    ];

    Id matterId = intake.Matter__c;

    // Delete objects directly related to the Intake
    toDelete.addAll([SELECT Id FROM Task WHERE WhatId = :intake.Id]);
    toDelete.addAll([SELECT Id FROM Event WHERE WhatId = :intake.Id]);
    toDelete.addAll([SELECT Id FROM Timeline_Entry__c WHERE Intake__c = :intake.Id]);
    toDelete.addAll([SELECT Id FROM Custom_Object__c WHERE Intake__c = :intake.Id]); // example custom
    toDelete.add(intake); // finally delete the intake itself

    // Optional: Cascade delete from related Matter
    if (includeMatterCascade && matterId != null) {
        List<SObject> matterRelated = new List<SObject>();
        matterRelated.addAll([SELECT Id FROM Task WHERE WhatId = :matterId]);
        matterRelated.addAll([SELECT Id FROM Event WHERE WhatId = :matterId]);
        matterRelated.addAll([SELECT Id FROM Litify_PM__Phase__c WHERE Litify_PM__Matter__c = :matterId]);
        matterRelated.addAll([SELECT Id FROM Timeline_Entry__c WHERE Matter__c = :matterId]);
        matterRelated.addAll([SELECT Id FROM Custom_Object__c WHERE Matter__c = :matterId]); // example
        matterRelated.addAll([SELECT Id FROM Matter__c WHERE Id = :matterId]);

        toDelete.addAll(matterRelated);
    }

    // LOG records
    System.debug('Found ' + toDelete.size() + ' records to delete:');
    for (SObject s : toDelete) {
        System.debug(s.getSObjectType() + ' - ' + s.Id);
    }

    // PERFORM DELETE if not dry run
    if (!dryRun) {
        delete toDelete;
        System.debug('Deletion completed.');
    } else {
        System.debug('Dry run complete. No records deleted.');
    }

} catch (Exception ex) {
    System.debug('ERROR: ' + ex.getMessage());
}
```

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-06-09 13:10:48

```
<?xml version="1.0" encoding="UTF-8"?>
<LightningComponentBundle xmlns="http://soap.sforce.com/2006/04/metadata">
    <apiVersion>60.0</apiVersion>
    <isExposed>true</isExposed>
    <targets>
        <target>lightning__FlowScreen</target>
    </targets>
    <targetConfigs>
        <targetConfig targets="lightning__FlowScreen">
            <property name="questions" type="String" label="Questions JSON" />
            <property name="recordId" type="String" label="Record ID" />
        </targetConfig>
    </targetConfigs>
</LightningComponentBundle>
```

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-06-09 13:11:43

JS:

```
import { LightningElement, api, track } from 'lwc';

export default class DynamicQuestionnaireFlow extends LightningElement {
    @api questions; // JSON string from Flow
    @api recordId;
    @track questionList = [];
    @track answers = {};

    connectedCallback() {
        try {
            this.questionList = JSON.parse(this.questions);
        } catch (e) {
            console.error('Invalid questions JSON:', e);
        }
    }

    handleInputChange(event) {
        const qName = event.target.dataset.qname;
        let value = event.target.type === 'checkbox' ? event.target.checked : event.target.value;
        this.answers[qName] = value;
    }

    @api
    validate() {
        // Optional: enforce required fields
        return { isValid: true };
    }

    @api
    get flowData() {
        return {
            responses: JSON.stringify(this.answers)
        };
    }
}
```

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-06-09 13:12:35

HTML:

```
<template>
    <template if:true={questionList}>
        <template for:each={questionList} for:item="q">
            <div key={q.name} class="slds-m-bottom_small">
                <label class="slds-form-element__label" for={q.name}>{q.label}</label>

                <template if:true={q.isPicklist}>
                    <select data-qname={q.name} onchange={handleInputChange} class="slds-input">
                        <template for:each={q.picklistValues} for:item="val">
                            <option key={val} value={val}>{val}</option>
                        </template>
                    </select>
                </template>

                <template if:true={q.isCheckbox}>
                    <input type="checkbox" data-qname={q.name} onchange={handleInputChange}/>
                </template>

                <template if:true={q.isText}>
                    <input type="text" data-qname={q.name} onchange={handleInputChange} class="slds-input"/>
                </template>

                <template if:true={q.isDate}>
                    <input type="date" data-qname={q.name} onchange={handleInputChange} class="slds-input"/>
                </template>

                <template if:true={q.isNumber}>
                    <input type="number" data-qname={q.name} onchange={handleInputChange} class="slds-input"/>
                </template>
            </div>
        </template>
    </template>
</template>
```

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-06-09 13:13:24

APEX:

```
public with sharing class QuestionnaireService {
    @AuraEnabled
    public static String getQuestionsForRecord(Id recordId) {
        // STEP 1: Determine object + record type
        SObject record = [SELECT RecordType.DeveloperName FROM Application__c WHERE Id = :recordId LIMIT 1];
        String objectApi = 'Application__c';
        String recordType = ((Application__c) record).RecordType.DeveloperName;

        // STEP 2: Find matching Questionnaire
        List<Questionnaire__mdt> matches = [
            SELECT DeveloperName
            FROM Questionnaire__mdt
            WHERE Object_API_Name__c = :objectApi AND Record_Type_Name__c = :recordType AND Is_Active__c = true
            ORDER BY Display_Order__c ASC
        ];

        if (matches.isEmpty()) throw new AuraHandledException('No matching questionnaire');

        String qName = matches[0].DeveloperName;

        // STEP 3: Get related questions
        List<Question__mdt> questions = [
            SELECT DeveloperName, Question_Text__c, Question_Type__c, Is_Required__c, Picklist_Values__c
            FROM Question__mdt
            WHERE Questionnaire__c = :qName
            ORDER BY Display_Order__c ASC
        ];

        // STEP 4: Build JSON DTOs
        List<Map<String, Object>> questionJson = new List<Map<String, Object>>();

        for (Question__mdt q : questions) {
            Map<String, Object> entry = new Map<String, Object>{
                'name' => q.DeveloperName,
                'label' => q.Question_Text__c,
                'type' => q.Question_Type__c,
                'required' => q.Is_Required__c,
                'isCheckbox' => q.Question_Type__c == 'Checkbox',
                'isPicklist' => q.Question_Type__c == 'Picklist',
                'isText' => q.Question_Type__c == 'Text',
                'isDate' => q.Question_Type__c == 'Date',
                'isNumber' => q.Question_Type__c == 'Number'
            };
            if (q.Question_Type__c == 'Picklist' && String.isNotBlank(q.Picklist_Values__c)) {
                entry.put('picklistValues', q.Picklist_Values__c.split(','));
            }
            questionJson.add(entry);
        }

        return JSON.serialize(questionJson);
    }
}
```

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-06-09 13:18:51

```
public with sharing class SObjectUtils {
    // Note: recordId.getSobjectType() is a cheaper alternative to scanning
    // the global describe; the key-prefix scan is kept here as written.
    @AuraEnabled(cacheable=true)
    public static String getObjectApiNameFromId(Id recordId) {
        if (recordId == null) {
            throw new AuraHandledException('Record ID is required.');
        }

        String keyPrefix = ((String) recordId).substring(0, 3);
        Map<String, Schema.SObjectType> globalDescribe = Schema.getGlobalDescribe();

        for (Schema.SObjectType sobjectType : globalDescribe.values()) {
            Schema.DescribeSObjectResult describe = sobjectType.getDescribe();
            if (describe.getKeyPrefix() == keyPrefix) {
                return describe.getName(); // Returns e.g., 'Account', 'Application__c', etc.
            }
        }

        throw new AuraHandledException('Unable to determine object type for ID: ' + recordId);
    }
}
```

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-06-09 14:18:39

$oldPath = (Get-ItemProperty -Path 'Registry::HKEY_CURRENT_USER\Environment' -Name PATH).Path
$newPath = $oldPath + ';C:\Program Files\sf\bin'
Set-ItemProperty -Path 'Registry::HKEY_CURRENT_USER\Environment' -Name PATH -Value $newPath

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-06-11 13:44:06

```
public with sharing class RoleService {
    public static void updateMissingMatterFromIntake(Set<Id> roleIds) {
        List<Role__c> rolesToUpdate = new List<Role__c>();

        // Query Roles and related Intake and Matter
        List<Role__c> roles = [
            SELECT Id, Matter__c, Intake__c,
                   Intake__r.Matter__c
            FROM Role__c
            WHERE Id IN :roleIds
        ];

        for (Role__c role : roles) {
            if (role.Matter__c == null && role.Intake__c != null && role.Intake__r.Matter__c != null) {
                role.Matter__c = role.Intake__r.Matter__c;
                rolesToUpdate.add(role);
            }
        }

        if (!rolesToUpdate.isEmpty()) {
            update rolesToUpdate;
        }
    }
}
```

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-06-17 10:58:10

```
public with sharing class PDFGeneratorController {
    // Visualforce controller property; @AuraEnabled is not needed here
    public Application__c application { get; set; }

    public PDFGeneratorController() {
        Id appId = ApexPages.currentPage().getParameters().get('id');
        application = [SELECT FirstName__c, LastName__c, Email__c FROM Application__c WHERE Id = :appId];
    }
}
```

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-06-17 10:58:25

```
<apex:page renderAs="pdf" controller="PDFGeneratorController">
    <div style="position: relative; width: 800px; height: 1000px;">

        <!-- Background Image (optional, must be public or static resource) -->
        <img src="{!URLFOR($Resource.Application_Form_BG)}" style="position:absolute; top:0; left:0; width:800px; height:1000px;" />

        <!-- Field Overlays -->
        <div style="position: absolute; top: 120px; left: 200px; font-size:12px;">{!application.FirstName__c}</div>
        <div style="position: absolute; top: 160px; left: 200px; font-size:12px;">{!application.LastName__c}</div>
        <div style="position: absolute; top: 200px; left: 200px; font-size:12px;">{!application.Email__c}</div>

    </div>
</apex:page>
```

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-06-17 11:06:54

```
public with sharing class DynamicPDFController {
    public String recordId { get; set; }
    public String templateName { get; set; }
    public SObject recordData { get; set; }
    public List<PDF_Field_Mapping__c> fieldMappings { get; set; }

    public DynamicPDFController() {
        recordId = ApexPages.currentPage().getParameters().get('id');
        templateName = ApexPages.currentPage().getParameters().get('template');

        // Find the template field mappings
        fieldMappings = [
            SELECT Field_API_Name__c, Object_Name__c, Position_Top__c, Position_Left__c, Font_Size__c
            FROM PDF_Field_Mapping__c
            WHERE Template_Name__c = :templateName
        ];

        if (!fieldMappings.isEmpty()) {
            String sobjectType = fieldMappings[0].Object_Name__c;
            String soql = 'SELECT Id';

            for (PDF_Field_Mapping__c m : fieldMappings) {
                soql += ', ' + m.Field_API_Name__c;
            }

            soql += ' FROM ' + sobjectType + ' WHERE Id = :recordId';
            recordData = Database.query(soql);
        }
    }

    public String getFieldValue(String fieldApi) {
        // SObject has no containsField(); check the populated-fields map instead
        if (recordData != null && recordData.getPopulatedFieldsAsMap().containsKey(fieldApi)) {
            Object value = recordData.get(fieldApi);
            return value != null ? String.valueOf(value) : '';
        }
        return '';
    }
}
```

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-06-23 14:51:03

https://www.youtube.com/@statquest

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-06-24 11:30:18

String prefix = 'Litify'; // Change this to match your prefix filter (case-sensitive)
List<String> emptyObjects = new List<String>();
Map<String, Integer> recordCounts = new Map<String, Integer>();
Integer queried = 0;
Integer skipped = 0;
Integer maxObjects = 100;
Integer processed = 0;

for (Schema.SObjectType objType : Schema.getGlobalDescribe().values()) {
    if (processed >= maxObjects) break;

String objName = objType.getDescribe().getName();

if (!objName.startsWith(prefix)) continue; // prefix filter
if (!objType.getDescribe().isQueryable() || objType.getDescribe().isCustomSetting()) {
    skipped++;
    continue;
}

try {
    String query = 'SELECT count() FROM ' + objName;
    Integer count = Database.countQuery(query);
    if (count == 0) {
        emptyObjects.add(objName);
    } else {
        recordCounts.put(objName, count);
    }
    queried++;
    processed++;
} catch (Exception e) {
    System.debug('Skipped: ' + objName + ' - ' + e.getMessage());
    skipped++;
    processed++;
}

}

System.debug('========= OBJECT RECORD COUNTS =========');
for (String key : recordCounts.keySet()) {
    System.debug(key + ' → ' + recordCounts.get(key) + ' records');
}

System.debug('========= EMPTY OBJECTS =========');
for (String obj : emptyObjects) {
    System.debug(obj + ' has no records.');
}

System.debug('Queried: ' + queried + ', Skipped: ' + skipped + ', Total Processed: ' + processed);

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-06-24 11:40:59

String prefix = 'Litify'; // Change this to match your prefix filter (case-sensitive)
List<String> emptyObjects = new List<String>();
Map<String, Integer> recordCounts = new Map<String, Integer>();
Integer queried = 0;
Integer skipped = 0;

for (Schema.SObjectType objType : Schema.getGlobalDescribe().values()) {
    String objName = objType.getDescribe().getName();

if (!objName.startsWith(prefix)) continue; // prefix filter
if (!objType.getDescribe().isQueryable() || objType.getDescribe().isCustomSetting()) {
    skipped++;
    continue;
}

try {
    String query = 'SELECT count() FROM ' + objName;
    Integer count = Database.countQuery(query);
    if (count == 0) {
        emptyObjects.add(objName);
    } else {
        recordCounts.put(objName, count);
    }
    queried++;
} catch (Exception e) {
    System.debug('Skipped: ' + objName + ' - ' + e.getMessage());
    skipped++;
}

}

System.debug('========= OBJECT RECORD COUNTS =========');
for (String key : recordCounts.keySet()) {
    System.debug(key + ' → ' + recordCounts.get(key) + ' records');
}

System.debug('========= EMPTY OBJECTS =========');
for (String obj : emptyObjects) {
    System.debug(obj + ' has no records.');
}

System.debug('Queried: ' + queried + ', Skipped: ' + skipped + ', Total objects: ' + (queried + skipped));


Richard Schnitzler (rschnitzler@shield-legal.com)
2025-06-27 14:11:42

Standard Operating Procedure (SOP)

Title: API-Based Record Import Process for Matter, Account, Intake, and Matter Roles
Version: 1.0
Owner: Salesforce Development Team
Effective Date: [Insert Date]
Review Frequency: Quarterly

---

  1. Purpose

To define the process for securely importing external data into Salesforce targeting the following key objects:

Account

Matter__c

Intake__c

Matter_Role__c

---

  2. Scope

This SOP applies to all API-based data import processes initiated from external systems into Salesforce. It includes validation, transformation, and record linking procedures.

---

  3. Responsibilities

| Role | Responsibility |
|---|---|
| Integration Team | Ensures external systems format and send data properly |
| Salesforce Dev Team | Maintains API endpoints, triggers, and test coverage |
| QA / Admin | Validates imported data in staging environment before production |

---

  4. Preconditions

API authentication (OAuth / Named Credential) is configured

External system has valid mappings and required permissions

Related fields on Matter__c, Intake__c, and Matter_Role__c are exposed via the API schema

Triggers and handlers support idempotent upserts

---

  5. Process Flow Overview

5.1 Record Import Hierarchy

  1. Account

  2. Intake__c (lookup to Account)

  3. Matter__c (lookup to Account and optionally Intake)

  4. Matter_Role__c (lookup to Matter__c, and Contact or User as needed)

5.2 Sequence Diagram

External System → [Salesforce API Endpoint]
  → Upsert Account
  → Upsert Intake__c (linked to Account)
  → Upsert Matter__c (linked to Account and Intake__c)
  → Upsert Matter_Role__c (linked to Matter__c + Contact/User)

---

  6. Detailed Steps

6.1 Authentication

Use OAuth 2.0 or Named Credential (JWT / Password flow)

Token expiry handling must be implemented on the client side

6.2 Data Payload Requirements

Each API call must conform to a defined JSON structure. Example:

```
{
  "accountExternalId": "ACCT-123",
  "account": { "Name": "Acme Corp", "Type": "Client" },
  "intake": {
    "External_Id__c": "INTK-456",
    "Status__c": "New",
    "Account__r": { "External_Id__c": "ACCT-123" }
  },
  "matter": {
    "External_Id__c": "MATT-789",
    "Type__c": "Litigation",
    "Account__r": { "External_Id__c": "ACCT-123" },
    "Intake__r": { "External_Id__c": "INTK-456" }
  },
  "matterRoles": [
    {
      "Role__c": "Plaintiff",
      "Contact__r": { "Email": "john.doe@example.com" },
      "Matter__r": { "External_Id__c": "MATT-789" }
    }
  ]
}
```

---

  7. Upsert & Linking Logic

| Object | External Key | Notes |
|---|---|---|
| Account | External_Id__c | Ensure duplicates are not created via External_Id__c |
| Intake__c | External_Id__c | Must link back to Account__c |
| Matter__c | External_Id__c | Must link to both Account__c and Intake__c if available |
| Matter_Role__c | Composite (Role + Contact + Matter) | Ensure one role per Contact per Matter unless duplicates allowed |
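
The external-key strategy above maps directly onto Apex's field-token upsert, which is what makes re-imports idempotent. A hedged sketch — the `External_Id__c` field name follows the payload example and is an assumption, not existing schema:

```
// Sketch: idempotent upsert keyed on an external Id field. A second run
// with the same payload updates the existing Account rather than creating
// a duplicate. External_Id__c is assumed to be a custom External ID field.
Account acct = new Account(Name = 'Acme Corp', External_Id__c = 'ACCT-123');
Database.UpsertResult res = Database.upsert(acct, Account.External_Id__c, false);
System.debug('created=' + res.isCreated() + ', id=' + res.getId());
```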

---

  8. Error Handling

| Error Type | Handling Strategy |
|---|---|
| Missing Reference | Return 400 Bad Request with detail on missing lookup |
| Duplicate | Return 409 Conflict or silently update (if upsert logic applies) |
| Validation Error | Log and return full validation error string |
| Partial Success | Return 207 Multi-Status (if supported) |

---

  9. Deployment & Testing

Changes must be validated in a Staging org first

All Apex code should include test coverage for each object insert/upsert path

Enable debug logs for the integration user during go-live

---

  10. Monitoring and Auditing

Use a custom object Import_Log__c or Integration_Audit__c to log all imports

Track import timestamps, object references, and error states

Schedule regular dashboard/report to flag failed or partial imports
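
The logging object suggested above can be written from the integration's catch block so every failed payload leaves an auditable row. Sketch only; `Import_Log__c` and its fields are the proposed custom object, not existing schema:

```
// Sketch: persist a failure row on the proposed Import_Log__c object.
try {
    // ... upsert Account / Intake__c / Matter__c / Matter_Role__c here ...
} catch (Exception ex) {
    insert new Import_Log__c(
        Status__c = 'Failed',              // error state for dashboards
        Message__c = ex.getMessage(),      // full validation/error string
        Imported_At__c = Datetime.now());  // import timestamp
}
```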

---

  11. Related Documents

Data Dictionary for Account, Matter__c, Intake__c, and Matter_Role__c

API Field Mapping Sheet

API Auth Configuration Guide

Trigger and Handler Codebase Reference

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-06-27 18:19:59

In this role, I will serve as a primary liaison between the client and the technical team, ensuring a seamless flow of information and support. My responsibilities include developing comprehensive documentation tailored to the client's operational and technical needs, providing detailed guidance and clarity on processes, and proactively assisting the client through each phase of implementation and ongoing maintenance. Additionally, I will generate and deliver regular and ad-hoc reports to support client decision-making, analyze trends, and identify areas for improvement. Throughout, I will ensure the client feels supported, informed, and confident in navigating our solutions.

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-07-01 11:09:47

Salesforce is a powerful cloud-based software platform that helps businesses manage their relationships with clients, leads, and partners. At its core, it’s a Customer Relationship Management (CRM) tool.

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-07-01 11:10:09

Litify is a specialized software built on top of Salesforce, specifically designed for law firms and legal service organizations. It takes Salesforce’s flexible foundation and adds legal-specific features

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-07-01 14:36:33

public without sharing class AccountTriggerHandler implements TriggerHandler {
    private Boolean triggerIsExecuting;
    private Integer triggerSize;
    public AccountTriggerHelper helper;

    // The ObjectAutomation__c override check is performed in AccountTrigger
    // before this handler is invoked, so it is not repeated here.
    public AccountTriggerHandler(Boolean triggerIsExecuting, Integer triggerSize) {
        this.triggerIsExecuting = triggerIsExecuting;
        this.triggerSize = triggerSize;
        this.helper = new AccountTriggerHelper();
    }

    public void beforeInsert(List<Account> newAccounts) {
        // helper.doTask1();
        // helper.doTask2();
    }

    public void beforeUpdate(List<Account> oldAccounts, List<Account> newAccounts, Map<Id, SObject> oldAccountMap, Map<Id, SObject> newAccountMap) {
        // helper.doTask3();
        // helper.doTask4();
    }

    public void beforeDelete(List<Account> oldAccounts, Map<Id, SObject> oldAccountMap) {
        // helper.doTask5();
        // helper.doTask1();
    }

    public void afterInsert(List<Account> newAccounts, Map<Id, SObject> newAccountMap) {
        // helper.doTask2();
        // helper.doTask3();
    }

    public void afterUpdate(List<Account> oldAccounts, List<Account> newAccounts, Map<Id, SObject> oldAccountMap, Map<Id, SObject> newAccountMap) {
        // helper.doTask4();
        // helper.doTask5();
    }

    public void afterDelete(List<Account> oldAccounts, Map<Id, SObject> oldAccountMap) {
        // helper.doTask3();
        // helper.doTask1();
    }

    public void afterUndelete(List<Account> newAccounts, Map<Id, SObject> newAccountMap) {
        // helper.doTask4();
        // helper.doTask2();
    }
}

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-07-01 14:41:56

trigger AccountTrigger on Account (
    before insert, before update, before delete,
    after insert, after update, after delete, after undelete
) {
    Id profileId = UserInfo.getProfileId();
    ObjectAutomation__c oa = ObjectAutomation__c.getInstance(profileId);

// Skip trigger logic if override is active
if (oa != null && oa.Account_Override__c) {
    return;
}

AccountTriggerHandler handler = new AccountTriggerHandler(Trigger.isExecuting, Trigger.size);

if (Trigger.isBefore) {
    if (Trigger.isInsert) {
        handler.beforeInsert(Trigger.new);
    }
    if (Trigger.isUpdate) {
        handler.beforeUpdate(Trigger.old, Trigger.new, Trigger.oldMap, Trigger.newMap);
    }
    if (Trigger.isDelete) {
        handler.beforeDelete(Trigger.old, Trigger.oldMap);
    }
}

if (Trigger.isAfter) {
    if (Trigger.isInsert) {
        handler.afterInsert(Trigger.new, Trigger.newMap);
    }
    if (Trigger.isUpdate) {
        handler.afterUpdate(Trigger.old, Trigger.new, Trigger.oldMap, Trigger.newMap);
    }
    if (Trigger.isDelete) {
        handler.afterDelete(Trigger.old, Trigger.oldMap);
    }
    if (Trigger.isUndelete) {
        handler.afterUndelete(Trigger.new, Trigger.newMap);
    }
}

}

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-07-09 10:19:04

https://www.youtube.com/watch?v=XEp3T_P0Qo4

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-07-10 11:25:40

https://www.youtube.com/watch?v=RPjl64djdYo

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-07-10 15:18:39

https://youtube.com/shorts/YxkFR1lvet8?si=iqxE3IHjXiV_UZzu

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-07-14 11:38:48

Certainly! Here's a professionally verbose version of the email you requested:


Subject: Follow-Up: Field Mapping Confirmation and Data Update Monitoring

Dear [Recipient's Name],

I hope this message finds you well.

I am writing to provide a brief but important update regarding the mapping initiative we previously discussed. Earlier today, I accessed the relevant Salesforce environment in order to personally verify the API names associated with the fields that were identified for mapping. This step was taken to ensure complete accuracy in the configuration and alignment of our integration logic, and to avoid any potential discrepancies between field labels and underlying schema definitions.

As part of this verification process, I was able to confirm the precise API names for the fields in question, including those associated with the phone number data point that has been central to our recent conversations. At this time, we are closely monitoring for the anticipated updates to the phone number field, as discussed.

Please rest assured that no data was altered during this review, and I will not be making any changes to the dataset or field-level values without first coordinating with your team. Should a need arise to perform any modifications, updates, or corrections to the data, I will be sure to reach out in advance to obtain the appropriate approvals and to ensure full transparency.

Thank you again for your continued collaboration. Please don’t hesitate to reach out if you have any questions or would like to review the confirmed API mappings in more detail.

Warm regards, [Your Full Name] [Your Job Title] [Your Company/Team Name] [Email Address] [Phone Number]

---

Let me know if you want to make it more concise or tailor it to a specific client or project.

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-07-14 15:57:55

https://www.youtube.com/watch?v=ioS7AMXrEGI

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-07-15 14:25:15

https://www.youtube.com/watch?v=BZk8UZ7a7uY

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-07-16 10:47:51

https://www.youtube.com/shorts/7QsyfrUhPPw

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-07-16 13:11:39

https://www.youtube.com/watch?v=zhbXXzUTHyw

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-07-16 15:45:53

https://www.youtube.com/watch?v=wPDQgDs7Mww

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-07-17 13:26:55

Here's a professional and informative draft for your email:


Subject: Review of DL’s Salesforce User Licenses and Integration Accounts

Hi [Recipient's Name],

I wanted to bring to your attention a few key observations regarding DL’s current Salesforce user license utilization and integration setup.

At present, DL appears to have at least six or seven active user accounts that have never logged into the system. In addition to those, there are several other accounts that have not logged in for well over a year. While it's understandable that some dormant accounts may exist for operational or compliance reasons, the volume and duration of inactivity suggest there may be opportunities for cost savings or better user management.

Notably, a few of these inactive users appear to be associated with API integrations. In many cases, organizations choose to allocate a dedicated “API user” license for each system integration (e.g., web platforms, middleware, or ETL tools). However, best practice—and a potential cost optimization—would be to consolidate these into a single dedicated API user, with appropriate security and access controls configured to support all required integrations.

I’d recommend conducting a license usage review and, if appropriate, deactivating or repurposing unused accounts and consolidating API connections. We’re happy to assist with identifying impacted users and outlining a consolidation and cleanup plan if needed.

Please let me know if you’d like to discuss this in more detail or set up a quick review.

Best regards, [Your Name] [Your Title / Role] [Your Contact Info]

---

Let me know if you'd like to add usage data or a suggested next step.

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-07-17 15:29:58

https://www.youtube.com/watch?v=UuLv8BxU6kA

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-07-18 10:50:08

https://www.youtube.com/watch?v=ZOrF3Nnqegw

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-07-21 16:23:40

public class ContentDocumentLinkHandler {

public static void preventDuplicateFileNames(List<ContentDocumentLink> newLinks) {
    Set<Id> parentRecordIds = new Set<Id>();
    Set<Id> newDocIds = new Set<Id>();

    for (ContentDocumentLink link : newLinks) {
        if (link.LinkedEntityId != null && link.ContentDocumentId != null) {
            parentRecordIds.add(link.LinkedEntityId);
            newDocIds.add(link.ContentDocumentId);
        }
    }

    // Get titles of the incoming documents
    Map<Id, String> newDocTitles = new Map<Id, String>();
    for (ContentDocument doc : [SELECT Id, Title FROM ContentDocument WHERE Id IN :newDocIds]) {
        newDocTitles.put(doc.Id, doc.Title);
    }

    // Query existing file titles already attached to those parent records
    Map<Id, Set<String>> recordToTitles = new Map<Id, Set<String>>();
    for (ContentDocumentLink link : [
        SELECT LinkedEntityId, ContentDocument.Title
        FROM ContentDocumentLink
        WHERE LinkedEntityId IN :parentRecordIds
    ]) {
        if (!recordToTitles.containsKey(link.LinkedEntityId)) {
            recordToTitles.put(link.LinkedEntityId, new Set&lt;String&gt;());
        }
        recordToTitles.get(link.LinkedEntityId).add(link.ContentDocument.Title);
    }

    // Prevent duplicates
    for (ContentDocumentLink newLink : newLinks) {
        String newTitle = newDocTitles.get(newLink.ContentDocumentId);
        Set&lt;String&gt; existingTitles = recordToTitles.get(newLink.LinkedEntityId);

        if (existingTitles != null &amp;&amp; existingTitles.contains(newTitle)) {
            newLink.addError('A file with this exact name has already been attached to this record.');
        }
    }
}

}

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-07-22 17:13:23

Here's a professional, corporate-style documentation draft titled:

---

Document Title: Standard Operating Procedure (SOP) for JSON Field Mapping and QA for LawRuler Campaigns

---

  1. Purpose

This document outlines the standard procedure for creating, validating, and quality-assuring JSON mappings used to associate campaign-level fields with question data within the LawRuler platform. This process ensures that all campaign-specific custom fields are accurately populated and extraneous or null fields are eliminated. Each campaign must be handled independently, while the matter and party structures remain standardized.

---

  2. Scope

This SOP applies to all Salesforce and LawRuler integration developers responsible for preparing and reviewing JSON payloads intended for campaign ingestion. The document covers:

Mapping campaign-specific fields

Inserting standardized matter and party information

Validating final JSON structure

Performing QA prior to deployment

---

  3. Responsibilities

Primary Developer: Drafts and prepares the campaign-specific JSON mapping.

QA Developer: Reviews and validates the JSON structure; checks for data integrity and accuracy.

Project Manager: Ensures timely completion and logging of QA sign-off.

---

  4. Definitions

Campaign Fields: Custom metadata or input values unique to each marketing campaign in LawRuler.

Question Data: Structured form entries or data points collected via intake forms or lead sources.

Matter/Party Information: Static or standardized information applicable across all campaigns.

QA (Quality Assurance): Review process conducted by a secondary developer to ensure compliance.

---

  5. Process Steps

Step 1: Extract Campaign Field Requirements

Review campaign specifications and identify required fields.

Use LawRuler’s field documentation or admin UI to confirm valid field names and expected formats.

Step 2: Draft JSON Mapping

Begin with a clean JSON structure.

Insert question responses using campaign-specific fields and map them to the correct LawRuler keys.

Add the standard matter and party objects as structured in past campaigns. These do not vary.

Step 3: Sanitize the JSON

Remove any keys with null, empty, or whitespace-only values.

Confirm that no placeholder field names remain (e.g., "custom_field_1").
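Step 3 above can be sketched in Python (a minimal illustration; the `sanitize` helper name and its treatment of nested structures are assumptions, not LawRuler tooling):

```python
def sanitize(payload):
    """Recursively drop keys whose values are None, empty, or whitespace-only.

    Containers that end up empty after cleaning are dropped as well.
    """
    if isinstance(payload, dict):
        cleaned = {k: sanitize(v) for k, v in payload.items()}
        return {
            k: v for k, v in cleaned.items()
            if v not in (None, '', {}, [])
            and not (isinstance(v, str) and not v.strip())
        }
    if isinstance(payload, list):
        return [sanitize(v) for v in payload]
    return payload
```

Running this over a drafted mapping before QA removes blank keys in one pass instead of hand-deleting them.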

Step 4: Internal QA Review

The QA Developer must:

Verify Field Matching: Ensure campaign fields are mapped to correct question data.

Check Field Validity: Confirm all custom fields are valid within LawRuler’s schema.

Validate JSON Structure: Use a linter or JSON validator to ensure syntax compliance.

Check for Extraneous Fields: Confirm no fields remain that do not have mapped values.

Confirm Matter/Party Integrity: Ensure uniform structure and values across campaigns.

Step 5: Approval and Handoff

QA Developer signs off by appending their name and timestamp in the tracking sheet or task.

JSON is now cleared for use in intake forms, API payloads, or LawRuler automation.

---

  6. Best Practices

Always use LawRuler’s latest field documentation for reference.

Maintain a template JSON structure with sample data for reuse.

Track all campaign mappings in a shared repository or spreadsheet.

Use source control to version mappings and log QA changes.

---

  7. Tools

JSON Validator: Validate structure and syntax

VS Code + Plugins: Develop and lint JSON files

Git or SharePoint: Store approved mappings with history

Campaign Tracker: Spreadsheet to track mapping status

---

  8. Sample JSON Snippet

{ "campaignid": "12345", "customfields": { "referralsource": "TV", "injurytype": "Back", "incidentdate": "2025-07-01" }, "matter": { "type": "PI - Auto", "jurisdiction": "Nevada" }, "party": { "firstname": "John", "last_name": "Doe", "phone": "" } }

> Note: Keys like "injury_type" must reflect valid custom field names in LawRuler.

---

  9. Revision History

Date | Version | Description | Author

2025-07-22 | 1.0 | Initial draft | [Your Name]

---

Would you like this formatted for Confluence, SharePoint, or in Markdown/Word/PDF format?

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-07-24 14:07:38

https://www.youtube.com/watch?v=rovf9R5xA9M

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-07-29 16:43:03

'morgan & morgan', 'weitz & luxenberg', 'simmons hanly', 'lieff cabraser', 'robbins geller', 'kline & specter', 'dolman law group', 'morgan law firm', 'bergman draper', 'napoli shkolnik', 'baron & budd', 'motley rice', 'dicello', 'beasley allen', 'parker waichman', 'aylstock witkin', 'levin papantonio', 'matthews & associates', 'pintas & mullins', 'the lanier law firm', 'silva injury law', 'goza bocian', 'sokolove law', 'kazan law', 'goldberg persky', 'hagens berman', 'bernstein liebhard', 'eisbrouch marsh', 'under & shultz', 'sanders phillips', 'shepherd finkelman', 'williams kherkher', 'brown chiari', 'zonies sinars', 'the cochran firm', 'kopelowitz ostrow', 'shrader & associates', 'fleming nolen jez', 'girard sharp', 'ashcraft & gerel', 'farah & farah', 'injury lawyers 4u', 'the manning firm', 'ben crump', 'bailey glasser', 'acts law'

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-05 12:53:52

https://www.youtube.com/watch?v=2V3GBLE_n98

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-08 18:34:58

Lead # 818924

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-08 18:57:11

lead=818924 Lead # 818943

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-11 11:59:42

Work on backfill items that Sahil needs. ~Sahil Garg

• Get all sheets to zero and identify themes related to the reconciliation process. ~Edward Weber

• Pick an issue related to the campaigns or case types without integration and figure out how to solve it. Document the findings and move on to the next issue. ~Edward Weber

• Collaborate with Edward Weber to address the issue of missing campaigns or case types without integration. ~Sahil Garg

• Create the list of matters sent by Shield Legal that contain different fee arrangement IDs and send it to Abe for confirmation. ~Sahil Garg

• Create a new list of duplicate matters and confirm with the client which matters can be deleted. ~Sahil Garg

• Provide a list of matters where the back fill is not done yet for Sahil Garg. ~Edward Weber

• Separate the manual back fill matters into a different tab in the sheet. ~Edward Weber

• Sahil will create a list of matters with different fee arrangement IDs in Litify compared to what the legal team sent, and send that to Abe. ~Sahil Garg

• Edward will do the data reconciliation today for the missing in MRT tab and for the duplicates, the specialized cases as well. ~Edward Weber

• Sahil will create a list of duplicate matters for client confirmation on whether some can be deleted. ~Sahil Garg

• Sahil Garg needs to work on the report that contains 3,200 matters that need backfilling and share it with the team. ~Sahil Garg

• Figure out what's happening with the manual fixes for the specialized cases, specifically for Naomi Thomas. ~Edward Weber

• Collaborate with Edward Weber to address the manual fixes for the specialized cases, specifically for Naomi Thomas. ~Sahil Garg

• Validate the two cases as discussed and confirm which ones are duplicates. ~Sahil Garg

• Remove the confirmed duplicates from the list as discussed. ~Edward Weber

• Delete items called 'delete me' from the workspace. ~Sahil Garg

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-14 14:33:20

Here’s a concise, professional QA checklist for JSON payloads you can reuse across integrations.

JSON Payload QA Checklist

1) Pre-flight & Transport

[ ] Encoding: UTF-8, no BOM; special characters properly escaped.

[ ] Content-Type: application/json (not form-encoded unless specified).

[ ] Authorization header present/valid if required; no secrets in payload.

[ ] Line endings & whitespace: No trailing commas; pretty vs compact per spec.

2) Structure & Schema (per firm-specific format)

[ ] Top-level shape matches the firm’s contract (object vs array, nesting, key names, case-sensitivity).

[ ] Required keys present; forbidden/unused keys omitted.

[ ] Types correct for each field (string/number/boolean/object/array).

[ ] Enum fields only contain allowed values (e.g., gender ∈ {M,F,Male,Female} per firm spec).

[ ] Date/time formats match contract (e.g., MM/DD/YYYY vs ISO-8601).

[ ] Empty objects/arrays only when allowed (if not allowed, omit key).

[ ] Order (if the firm requires a specific ordering).

3) Blank / Placeholder Detection

[ ] No blank fields: Reject "", " ", null, [], {} when value is required.

[ ] No missing-but-required fields: Key must exist + be non-empty.

[ ] No placeholder/custom markers: Reject values like "<CUSTOM_*>", "{{...}}", "<<TBD>>", "REPLACEME", "N/A" where a real value is expected.

[ ] No default sentinel values: e.g., "000-00-0000" for SSN, "01/01/1900" for DOB, "0000000000" for phone unless contract explicitly allows.

4) Field-Level Validations

[ ] Names: Alphabetic + allowed punctuation; trimmed; proper casing if required.

[ ] Phone: Normalized to required format; digits count correct; country code as required.

[ ] Email: RFC-like validation; no disposable domains if disallowed.

[ ] SSN / National ID: Correct pattern (e.g., ^\d{9}$ or ^\d{3}-\d{2}-\d{4}$), not in banned ranges.

[ ] Postal codes: Match country-specific regex.

[ ] Monetary amounts: Numeric; correct precision/scale; currency code present if required.

[ ] Booleans: True booleans, not strings (true/false not "true"/"false") unless contract says strings.

[ ] Dates: Valid calendar dates; no future dates where not allowed (e.g., DOB).

[ ] IDs & Foreign Keys: Correct format/length; exist/known if cross-referenced.

5) Cross-Field Consistency Rules

[ ] Minor role vs DOB:

If role == "Minor" then age < threshold (e.g., <18 on today’s date).

If age < threshold, ensure a guardian/sponsor block exists and required guardian fields are populated.

If role != "Minor", assert age >= threshold or remove minor-only fields.

[ ] Address completeness: If any address field present, required companion fields present (street/city/state/postal/country).

[ ] Mutually exclusive fields: Only one of a set is present (e.g., caller_ssn XOR passport_number if spec requires).

[ ] Derived values: Check consistency (e.g., case_open_date ≤ case_contract_date if required).

[ ] Name vs gender titles: prefix consistent with gender if the firm enforces it.

6) Custom Field Indicators vs Real Values

[ ] Mapping placeholders removed: No template tokens from mapping layer (e.g., ${Field__c}, %FIELD%, {{Field__c}}).

[ ] Field suffixes/prefixes normalized: Send firm’s canonical keys (e.g., CallerFirstName, not internal CallerFirstName__c unless the firm requires underscores).

[ ] Lookup displays vs IDs: Ensure you’re sending the value the firm expects (e.g., "New York" vs internal picklist code "NY", or vice versa per contract).

7) Referential Integrity & Duplicates

[ ] No duplicate entries in arrays where uniqueness is expected (e.g., phone numbers, document list).

[ ] Primary elements exist: Exactly one primary phone/email/contact if required.

[ ] Relationship links valid: Child objects carry the correct parent identifier and context.
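The array-uniqueness rule above can be automated with a small helper (a sketch; trimming and casefolding strings before comparison are assumptions about what counts as a duplicate):

```python
def has_duplicates(items):
    """True if any value repeats in the list, ignoring case and surrounding
    whitespace for string entries."""
    norm = [i.strip().casefold() if isinstance(i, str) else i for i in items]
    return len(norm) != len(set(norm))
```

Run it over arrays where uniqueness is expected, such as the phone-number list or the document list, before sign-off.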

8) Data Quality & Normalization

[ ] Trimmed strings: No leading/trailing whitespace.

[ ] Unicode safety: No invalid code points; normalized diacritics if required.

[ ] Case normalization: Emails lowercased; state/country in required case or ISO code.

[ ] Length limits: Respect max lengths (e.g., comments ≤ 2000 chars).

9) Security, PII, and Compliance

[ ] Only required PII present; no extra PII fields.

[ ] Masking where required (e.g., last-4 only) and transport security (TLS).

[ ] No secrets (API keys, internal IDs not in contract).

[ ] Auditability: Correlate payload with internal INTERNAL_ID/request ID for traceability.

10) Error Handling & Logging

[ ] Clear failure reasons surfaced in logs (which key, what rule failed).

[ ] Redact PII in logs.

[ ] Retry safety: Payloads are idempotent where needed (e.g., include dedupe key).

---

Quick Heuristics You Can Automate

Blank/placeholder catchers (examples):

Empty/whitespace: ^\s*$

Template tokens: {{.*?}}|<CUSTOM.*?>|<<.*?>>|REPLACEME|TBD|N/?A\b

Sentinel values: ^0+$ (phones/IDs), ^01/01/1900$ (dates)
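The three catchers above can be compiled into one reusable check (regex set reconstructed from the list; `is_suspect` is a hypothetical name):

```python
import re

BLANK_RE = re.compile(r'^\s*$')
PLACEHOLDER_RE = re.compile(r'\{\{.*?\}\}|<CUSTOM.*?>|<<.*?>>|REPLACEME|TBD|N/?A\b',
                            re.IGNORECASE)
SENTINEL_RE = re.compile(r'^0+$|^01/01/1900$')

def is_suspect(value: str) -> bool:
    """True when a string looks blank, still templated, or a sentinel default."""
    return bool(
        BLANK_RE.match(value)
        or PLACEHOLDER_RE.search(value)
        or SENTINEL_RE.match(value)
    )
```

Note that the N/?A\b branch is aggressive with IGNORECASE (it also matches "na" at word ends inside names), so tune it per field before using it as a hard gate.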

DOB → age check (pseudo):

age = floor((today - dob)/365.2425)

Assert (role=="Minor") == (age < 18); if minor, require guardian block.

Schema conformance: Validate payload against the firm’s JSON Schema generated from mapping.json for that firm/case-type; run required, type, enum, format, and pattern checks.

---

Sign-off Gate (pass/fail before send)

[ ] Schema validation: PASS

[ ] Required fields present & non-blank: PASS

[ ] No placeholders/sentinels: PASS

[ ] Cross-field rules (incl. minor vs DOB): PASS

[ ] PII scope & masking: PASS

[ ] Contracted formats (dates, enums, phones): PASS

[ ] Logs clean (no PII, clear context): PASS

If you want, I can turn this into a reusable JSON Schema template plus a small validator script you can run in CI.

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-14 16:10:46

handling the IP_SAME_AS field

first_name_match = obj.mapping.get('CALLER_FIRST_NAME', '').lower().strip() == obj.mapping.get('IP_FIRST_NAME', '').lower().strip()
last_name_match = obj.mapping.get('CALLER_LAST_NAME', '').lower().strip() == obj.mapping.get('IP_LAST_NAME', '').lower().strip()
names_match = first_name_match and last_name_match
only_injured = obj.mapping.get('IP_LAST_NAME', '').strip() and not obj.mapping.get('CALLER_LAST_NAME', '').strip()
if names_match or only_injured:
    obj.mapping['IP_SAME_AS'] = 'CALLER'
    obj.mapping['IP_'] = ''
Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-14 16:13:03

You can wipe all IP_ fields in one sweep once you’ve decided IP_SAME_AS is "CALLER".

Here’s how you could drop it right after your existing block:

handling the IP_SAME_AS field

first_name_match = obj.mapping.get('CALLER_FIRST_NAME', '').lower().strip() == obj.mapping.get('IP_FIRST_NAME', '').lower().strip()
last_name_match = obj.mapping.get('CALLER_LAST_NAME', '').lower().strip() == obj.mapping.get('IP_LAST_NAME', '').lower().strip()
names_match = first_name_match and last_name_match
only_injured = obj.mapping.get('IP_LAST_NAME', '').strip() and not obj.mapping.get('CALLER_LAST_NAME', '').strip()

if names_match or only_injured:
    obj.mapping['IP_SAME_AS'] = 'CALLER'

    # Clear all IP_ fields (except IP_SAME_AS itself) if caller is the same
    for key in list(obj.mapping.keys()):
        if key.startswith('IP_') and key != 'IP_SAME_AS':
            obj.mapping[key] = ''

Why this works

list(obj.mapping.keys()) is used so you can safely modify values while iterating.

It wipes every IP_ key except IP_SAME_AS itself.

Keeps the logic generic, so if more IP_ fields appear later, they’ll also be cleared without code changes.

If you’d rather remove the keys entirely instead of setting them to empty strings, you can replace:

obj.mapping[key] = ''

with:

del obj.mapping[key]

Do you want me to have it remove those keys completely or just blank them out before sending to AWKO?

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-18 13:16:23

Can we replace \u00c2 & \u00a0

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-18 13:22:27

def clean_structure(data):
    """Recursively clean all strings in dicts/lists using clean_text."""
    if isinstance(data, dict):
        return {k: clean_structure(v) for k, v in data.items()}
    elif isinstance(data, list):
        return [clean_structure(i) for i in data]
    elif isinstance(data, str):
        return clean_text(data)
    return data

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-18 13:24:57

def clean_text(value: str) -> str:
    """
    Remove or replace unwanted unicode characters from a string.
    Currently strips:
    - \u00C2 (Â)
    - \u00A0 (non-breaking space)
    """
    if not isinstance(value, str):
        return value
    return value.replace('\u00C2', '').replace('\u00A0', ' ')
Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-20 10:50:28

BLEH

Format all Dates MM/DD/YYYY

for key, question in obj.mapping.items():
    if isinstance(question,str) and re.fullmatch(r"\d{2}-\d{2}-\d{4}", question):
        obj.mapping[key] = question.replace("-", "/")  # TODO: not working - "-" not replaced by "/"
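One likely reason the replacement above "isn't working" is that the incoming values aren't in MM-DD-YYYY shape, so the fullmatch never fires (ISO YYYY-MM-DD, for example, has a 4-2-2 digit layout). A broader normalizer, assuming those two formats are the only ones in play:

```python
import re

def to_mmddyyyy(value):
    """Normalize 'MM-DD-YYYY' or ISO 'YYYY-MM-DD' strings to 'MM/DD/YYYY'.

    Anything else (including non-strings) is returned unchanged.
    """
    if not isinstance(value, str):
        return value
    if re.fullmatch(r"\d{2}-\d{2}-\d{4}", value):        # MM-DD-YYYY
        return value.replace("-", "/")
    m = re.fullmatch(r"(\d{4})-(\d{2})-(\d{2})", value)  # YYYY-MM-DD
    if m:
        return f"{m.group(2)}/{m.group(3)}/{m.group(1)}"
    return value
```

In the loop, the body would become `obj.mapping[key] = to_mmddyyyy(question)`.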
Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-20 18:12:38

intake_response = requests.post(url, data=obj.mapping, headers=headers)

WAS

intake_response = requests.post(url, data=json.dumps(obj.mapping), headers=headers)

Now this is erroring:

case_id = intake_response.json()['Case_ID']

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-20 18:15:59

print("AWKO response:", intake_response.text) # debug first

try: responsedata = intakeresponse.json() except ValueError: # Fallback: maybe JSON is wrapped in text responsedata = json.loads(intakeresponse.text)

caseid = responsedata.get('Case_ID')

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-22 11:31:17

import re

def parse_address(address: str) -> dict:
    """
    Parse a single-line US address into street, city, state, and ZIP.
    Handles irregular spacing, periods, and commas.

Example input:
    "123 Main St., Springfield IL 62704"
    "123 Main Street   ,   Los Angeles. CA.90001"

Returns:
    {
        "street": "123 Main St.",
        "city": "Springfield",
        "state": "IL",
        "zip": "62704"
    }
"""
if not address or not isinstance(address, str):
    return {"street": None, "city": None, "state": None, "zip": None}

# Normalize spacing and punctuation
clean = re.sub(r'\s+', ' ', address.strip())  # collapse multiple spaces
clean = clean.replace('.', ',')  # treat periods like commas for splitting
parts = [p.strip() for p in clean.split(',') if p.strip()]

# If splitting by commas fails (some addresses don't use commas)
if len(parts) < 3:
    # Try splitting by spaces before last 2 tokens (state, zip)
    parts = re.split(r'\s(?=\S{2}\s+\d{5}(-\d{4})?$)', clean)

# Extract state & ZIP from last part
state_zip_match = re.search(r'([A-Za-z]{2})\s+(\d{5}(?:-\d{4})?)$', parts[-1])
if not state_zip_match:
    return {"street": None, "city": None, "state": None, "zip": None}

state = state_zip_match.group(1)
zip_code = state_zip_match.group(2)

# Remove state/zip from last part for clean city
city_part = parts[-1].replace(state_zip_match.group(0), '').strip()
city = city_part if city_part else parts[-2]

# Street is everything before city
street = ', '.join(parts[:-2]) if len(parts) > 2 else parts[0]

return {
    "street": street,
    "city": city,
    "state": state,
    "zip": zip_code
}
Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-22 11:35:04

print(parse_address("123 Main St., Springfield IL 62704"))
print(parse_address("456 Elm Road , Los Angeles. CA.90001"))
print(parse_address("789 Broadway New York NY 10001"))

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-22 11:58:56

import re

def parse_address(address: str) -> dict:
    """
    Parse a single-line US address into street, city, state, and ZIP.
    Handles:
    - Unit/Apt numbers (#, Apt, Suite)
    - Irregular spacing
    - Periods vs commas
    Returns a dict with keys: street, city, state, zip
    """
    if not address or not isinstance(address, str):
        return {"street": None, "city": None, "state": None, "zip": None}

# Normalize: collapse spaces, treat periods as commas
clean = re.sub(r'\s+', ' ', address.strip())
clean = clean.replace('.', ',')
parts = [p.strip() for p in clean.split(',') if p.strip()]

# If splitting by commas fails, fallback to regex split before STATE ZIP
if len(parts) < 3:
    parts = re.split(r'\s(?=\S{2}\s+\d{5}(-\d{4})?$)', clean)

# Extract state &amp; zip from last segment
state_zip_match = re.search(r'([A-Za-z]{2})\s+(\d{5}(?:-\d{4})?)$', parts[-1])
if not state_zip_match:
    return {"street": None, "city": None, "state": None, "zip": None}

state = state_zip_match.group(1)
zip_code = state_zip_match.group(2)

# City: what's left after removing state/zip from last part
city_candidate = parts[-1].replace(state_zip_match.group(0), '').strip()
city = city_candidate if city_candidate else parts[-2]

# Street: everything before city
street_raw = ', '.join(parts[:-2]) if len(parts) > 2 else parts[0]

# Capture Apt/Suite/Unit from remaining string if present
# Look for patterns like "Apt 4B", "Suite 200", "#12"
unit_match = re.search(r'\b(?:Apt|Apartment|Suite|Ste|#)\s*\w+', street_raw, re.IGNORECASE)
if unit_match:
    # Unit detected; kept inline in the street string for now
    street = street_raw
else:
    street = street_raw

return {
    "street": street.strip(),
    "city": city.strip(),
    "state": state,
    "zip": zip_code
}
Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-22 12:09:02

import importlib
import inspect
import json
import logging
from datetime import datetime, UTC

import functions_framework
import requests
from flask import Request

from gcp_utils import function_mapping, IntegrationMappingObject, IntegrationPostgres

@functions_framework.http
def main(request: Request):
    obj: IntegrationMappingObject = function_mapping(request)
    logger = logging.getLogger(f'{__file__}.{inspect.currentframe().f_code.co_name}')
    case_type = obj.log_row['case_type_id']
    postgres: IntegrationPostgres = obj.postgres

if not exists(f"{case_type}/mapping.json"):
    logger.warning('Case type %s not found', case_type)
    return ''

# If custom main.py exists for case type, run it
if exists(f"{case_type}/main.py"):
    importlib.import_module(f"{case_type}.main").main(obj)

 # Pull contract date from Postgres
obj.mapping['CASE_CONTRACT_DATE'] = postgres.query('SELECT MIN(date) as esign FROM lead_history_status WHERE lead_id=%s AND to_status=%s GROUP BY lead_id', (obj.mapping['INTERNAL_ID'], 'Signed e-Sign'))[0]
obj.mapping['CASE_CONTRACT_DATE'] = obj.mapping['CASE_CONTRACT_DATE']['esign'].strftime('%m/%d/%Y')

handle_Milberg(obj)
logger.info('Finished')
return ''

def handle_Milberg(obj: IntegrationMappingObject) -> str:
    # Sends records; hard errors bubble up to monitoring.
    token = '??????'  # TODO: env/secret
    if not token:
        raise ValueError("Missing TOKEN")
    headers = {
        'Authorization': f'Bearer {token}',
    }

url = 'https://'
intake_response = requests.post(url, data=obj.mapping, headers=headers)

# Log raw response for tracking
obj.postgres.update_response(obj.log_row['id'], f'{intake_response.__str__()} {intake_response.text}')

# Parse response and save case ID
case_id = intake_response.json()['Case_ID']
obj.postgres.update_external_id(obj.log_row['id'], case_id)
return case_id
Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-22 12:15:24

{ "test":"TEST", "parties": [ { "firstName": "c-25843", "lastName": "c-25845", "ShippingStreetAPI": "c-25852", "ShippingCityAPI": "c-25854", "ShippingStateAPI": "c-25855", "ShippingPostalCodeAPI": "c-25856", "phone": "c-25857", "email": "c-25859", "DateofBirth": "c-25848", "ssn": "c-25846", "ClientRelationship": "Self" }, { "firstName": "c-25864", "lastName": "c-25866", "ShippingStreetAPI": "c-28779", "ShippingCityAPI": "c-25874", "ShippingStateAPI": "c-25875", "ShippingPostalCodeAPI": "c-25876", "phone": "c-25878", "email": "c-25881", "DateofBirth": "c-25870", "ssn": "c-25868", "ClientRelationship": "Injured Party" }, { "firstName": "", "lastName": "", "ShippingStreetAPI": "", "ShippingCityAPI": "", "ShippingStateAPI": "", "ShippingPostalCodeAPI": "", "phone": "c-25886", "email": "", "ClientRelationship": "Emergency Contact" } ], "intakes": [ { "Source": "a00000000000000000", "caseType": "a00000000000000000", "intakeStatus": "Retainer Received", "SubStatus": "Under Review", "AttachmentFiles": [ { "name": "agreement.pdf", "Attachment": "base64" } ], "Question1": "", "Answer1": "", "Question22": "", "Answer22": "" } ] }

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-22 12:21:57

import re

# ---------- cleaning ----------

def _clean_text(v):
    if isinstance(v, str):
        # Remove \u00C2, replace NBSP with space
        return v.replace('\u00C2', '').replace('\u00A0', ' ')
    return v

def clean_structure(data):
    if isinstance(data, dict):
        return {k: clean_structure(v) for k, v in data.items()}
    if isinstance(data, list):
        return [clean_structure(x) for x in data]
    if isinstance(data, str):
        return _clean_text(data)
    return data

# ---------- address parsing with unit ----------

# Match "Apt 4B", "Apartment 2", "Unit 7", "Suite 210", "Ste 5", or "#12"
_UNIT_RE = re.compile(
    r'(?:\b(?:Apt|Apartment|Unit|Suite|Ste)\b\s*|#\s*)([A-Za-z0-9-]+)',
    re.IGNORECASE
)

def parse_address_with_unit(addr: str):
    """
    Returns (street, unit, city, state, zip) or (None, ...).
    Handles weird spacing; commas are primary separators; no global period->comma.
    """
    if not addr or not isinstance(addr, str):
        return None, None, None, None, None

    s = re.sub(r'\s+', ' ', addr.strip())

    # Split by commas first (safe for "St. Louis")
    parts = [p.strip() for p in s.split(',') if p.strip()]

    # If we don't have at least city and "STATE ZIP" in separate parts,
    # fall back to a regex-based split right before STATE ZIP.
    if len(parts) < 2:
        parts = re.split(r'\s(?=\S{2}\s+\d{5}(?:-\d{4})?$)', s)

    last = parts[-1] if parts else ''
    m = re.search(r'([A-Za-z]{2})\s+(\d{5}(?:-\d{4})?)$', last)
    if not m:
        return None, None, None, None, None

    state = m.group(1)
    zip_code = m.group(2)

    # City: remaining text of last part without "STATE ZIP", or previous part
    city_tail = last.replace(m.group(0), '').strip()
    city = city_tail if city_tail else (parts[-2] if len(parts) >= 2 else None)

    # Street: everything before the city
    if len(parts) > 2:
        street_raw = ', '.join(parts[:-2])
    elif len(parts) == 2:
        # [street, "City ST ZIP"]
        street_raw = parts[0]
    else:
        # Regex-split fallback gave us [street-ish, "ST ZIP"]
        street_raw = parts[0] if parts else None

    unit = None
    if street_raw:
        um = _UNIT_RE.search(street_raw)
        if um:
            unit_text = um.group(0)             # e.g., "Apt 4B" or "#12"
            unit = unit_text.strip()
            street_raw = (street_raw[:um.start()] + street_raw[um.end():]).strip()
            street_raw = re.sub(r'\s{2,}', ' ', street_raw)

    street = street_raw or None
    return street, unit, city or None, state, zip_code

def _street_with_unit(street, unit):
    if not street:
        return None
    return f"{street} {unit}".strip() if unit else street
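As a quick, self-contained sanity check of the unit pattern above (standalone copy of the regex; the sample addresses are made up):

```python
import re

# Standalone copy of _UNIT_RE from the parser above, for a quick check
UNIT_RE = re.compile(
    r'(?:\b(?:Apt|Apartment|Unit|Suite|Ste)\b\s*|#\s*)([A-Za-z0-9-]+)',
    re.IGNORECASE
)

for sample in ["123 Main St Apt 4B", "9 Oak Ave #12", "77 Pine Rd"]:
    m = UNIT_RE.search(sample)
    print(sample, "->", m.group(0) if m else None)
# -> "Apt 4B", "#12", and None respectively
```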

# ---------- apply to Milberg payload ----------

def set_emergency_contact_address(payload: dict, single_line_address: str) -> None:
    """
    Mutates payload in place:
    - finds the Emergency Contact party (by ClientRelationship), falling back to parties[-1]
    - parses single_line_address
    - populates Shipping_*_API fields on that party
    """
    if not isinstance(payload, dict):
        raise TypeError("payload must be a dict")
    parties = payload.get('parties') or []
    if not parties:
        raise ValueError("payload.parties is empty")

    idx = next(
        (i for i, p in enumerate(parties)
         if (p or {}).get('ClientRelationship') == 'Emergency Contact'),
        len(parties) - 1
    )

    street, unit, city, state, zip_code = parse_address_with_unit(_clean_text(single_line_address))

    # Only populate if we parsed something useful
    if street or unit or city or state or zip_code:
        target = parties[idx]
        target['Shipping_Street_API'] = _street_with_unit(street, unit) or target.get('Shipping_Street_API', '')
        target['Shipping_City_API'] = city or target.get('Shipping_City_API', '')
        target['Shipping_State_API'] = state or target.get('Shipping_State_API', '')
        target['Shipping_Postal_Code_API'] = zip_code or target.get('Shipping_Postal_Code_API', '')
Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-25 12:53:53

🧱

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-25 13:00:56

Payloads may contain files to be attached to the intake being created. Each attachment must be specified with the file name (including extension) and the file data encoded as a base64 string. To attach files to an intake, add an array field called 'AttachmentFiles' to a member of the intakes array. Each member of the AttachmentFiles array is a file to be attached to the intake and has two fields: name and Attachment. Specify the file name in the name field and the file data in the Attachment field. See the end of this document for an example.

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-25 13:16:04

def add_qa_csv_from_list(payload: dict, qa_list: list[dict], intake_idx: int = 0, csv_filename: str = "intake_qna.csv") -> None:
    buf = io.StringIO(newline="")
    writer = csv.writer(buf)
    writer.writerow(["Question", "Answer"])
    for item in qa_list:
        writer.writerow([item.get("q", "") or "", item.get("a", "") or ""])
    csv_bytes = buf.getvalue().encode("utf-8")
    attachments = _ensure_attachment_array(payload, intake_idx)
    attachments.append(_to_attachment(csv_filename, csv_bytes))

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-25 13:18:13

import base64
import csv
import io
import os

def _ensure_attachment_array(payload: dict, intake_idx: int = 0) -> list:
    intakes = payload.setdefault("intakes", [])
    while len(intakes) <= intake_idx:
        intakes.append({})
    attachments = intakes[intake_idx].setdefault("AttachmentFiles", [])
    return attachments

def _to_attachment(name: str, raw_bytes: bytes) -> dict:
    # No try/except, hard errors bubble up
    b64 = base64.b64encode(raw_bytes).decode("ascii")
    return {"name": name, "Attachment": b64}
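For reference, the three helpers compose like this, shown inline with made-up Q/A data (the sample question and filename are illustrative only):

```python
import base64
import csv
import io

# Build a one-row Q/A CSV and attach it to the first intake, mirroring
# add_qa_csv_from_list -> _ensure_attachment_array -> _to_attachment above
payload = {"intakes": [{}]}
buf = io.StringIO(newline="")
writer = csv.writer(buf)
writer.writerow(["Question", "Answer"])
writer.writerow(["Did you report the abuse to anyone?", "Yes"])  # sample row

attachment = {
    "name": "intake_qna.csv",
    "Attachment": base64.b64encode(buf.getvalue().encode("utf-8")).decode("ascii"),
}
payload["intakes"][0].setdefault("AttachmentFiles", []).append(attachment)
```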

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-25 16:31:54

⚠️

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-25 16:32:10

The fields and values that can be submitted using a custom JSON object must follow a predetermined set of guidelines, described below, in order to function properly. Validation of the data is performed by the submission handler, and the result (success or failure) is returned in a JSON response.

General Requirements

Post a matter:
Posting URL: https://api.bridgify.com/venture/openapi/v1/matter?firstName=&lastName=&matterType=&phone=&email=&source=&address=&city=&state=&zipCode=&dob=&orderId=&vendor=&handlingFirm
Method: POST
Headers:
● productionBE6z6t52g1T9ZtPcc5M0uYjM
● Content-Type: application/json

Post a document under a matter:
Posting URL: https://api.bridgify.com/venture/openapi/v1/document/upload
Method: POST
Headers:
● productionBE6z6t52g1T9ZtPcc5M0uYjM
Upload: select 'Body', then 'form data'. Fill in the matter id, and upload your file.
matter_id: insert the response ID from your recorded response when posting a matter
file: upload your file

(From: Bay Point REST API Lead Data Import, Usage, Formatting, and Definitions)

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-25 16:37:44

Below is the Python I wish to update:

from gcputils import IntegrationMappingObject
from main import discussed_with_other_firm

def main(obj: IntegrationMappingObject):
    data = obj.mapping
    intake = obj.lr_fields

    matter_params = {
        'email': intake['16'],
        "matterType": "Sexual Abuse",
        "handlingFirm": "Stinar Gould Greico & Hensley",
        "orderId": "6946121",
    }

    if not intake['c-25913']:
        data['facilityState'] = 'Illinois'

    if intake['c-25873'] or intake['9']:
        data['address'] += f" {intake['c-25873'] or intake['9']}"

    discussed_with_other_firm(obj, 'c-25895', 'c-25897')

    return matter_params
Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-25 16:41:00

import os
import re
import json
import logging
from datetime import datetime
from typing import Optional, Dict, Any, Tuple

import requests
from requests.adapters import HTTPAdapter, Retry

from gcputils import IntegrationMappingObject
from main import discussed_with_other_firm

# -------- Config --------

BASE_URL = "https://api.bridgify.com/venture/openapi/v1"
MATTER_URL = f"{BASE_URL}/matter"
DOC_UPLOAD_URL = f"{BASE_URL}/document/upload"

# The header value you shared should NOT be hardcoded.
# Put it in an environment variable: BRIDGIFY_API_KEY

def _auth_headers() -> Dict[str, str]:
    api_key = os.getenv("BRIDGIFY_API_KEY")
    if not api_key:
        raise RuntimeError("Missing env var BRIDGIFY_API_KEY")
    # Spec shows single custom header plus JSON content-type
    return {
        "productionBE6z6t52g1T9ZtPcc5M0uYjM": api_key,
        "Content-Type": "application/json",
    }

def _session() -> requests.Session:
    s = requests.Session()
    retries = Retry(
        total=3,
        backoff_factor=0.5,
        status_forcelist=(429, 500, 502, 503, 504),
        allowed_methods=("POST", "GET"),
    )
    s.mount("https://", HTTPAdapter(max_retries=retries))
    return s

# -------- Helpers: normalization & validation --------

_EMAIL_RX = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
_PHONE_RX = re.compile(r"\d")

def norm_email(v: Optional[str]) -> Optional[str]:
    if not v:
        return None
    v = v.strip()
    return v if _EMAIL_RX.match(v) else None

def norm_phone(v: Optional[str]) -> Optional[str]:
    if not v:
        return None
    digits = "".join(_PHONE_RX.findall(v))
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]
    return digits if len(digits) == 10 else None

def norm_state(v: Optional[str]) -> Optional[str]:
    return v.strip() if v else None

def norm_zip(v: Optional[str]) -> Optional[str]:
    if not v:
        return None
    digits = "".join(ch for ch in v if ch.isdigit())
    if len(digits) in (5, 9):
        return digits
    return None

def norm_date_mmddyyyy(v: Optional[str]) -> Optional[str]:
    """
    Accepts a variety of inputs and returns MM/DD/YYYY or None.
    """
    if not v:
        return None
    v = v.strip()

    # Already in MM/DD/YYYY?
    try:
        dt = datetime.strptime(v, "%m/%d/%Y")
        return dt.strftime("%m/%d/%Y")
    except ValueError:
        pass

    # Try ISO (YYYY-MM-DD)
    try:
        dt = datetime.strptime(v, "%Y-%m-%d")
        return dt.strftime("%m/%d/%Y")
    except ValueError:
        pass

    # Try YYYY/MM/DD
    try:
        dt = datetime.strptime(v, "%Y/%m/%d")
        return dt.strftime("%m/%d/%Y")
    except ValueError:
        pass

    return None
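The three strptime attempts above can equally be written as a loop over accepted formats; a demo-only equivalent (hypothetical name, not part of the module):

```python
from datetime import datetime
from typing import Optional

def norm_date_mmddyyyy_loop(v: Optional[str]) -> Optional[str]:
    # Same format cascade as above, expressed as a loop
    if not v:
        return None
    for fmt in ("%m/%d/%Y", "%Y-%m-%d", "%Y/%m/%d"):
        try:
            return datetime.strptime(v.strip(), fmt).strftime("%m/%d/%Y")
        except ValueError:
            pass
    return None
```

For example, norm_date_mmddyyyy_loop("1999-12-31") returns "12/31/1999", and unparseable input returns None.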

def require_fields(payload: Dict[str, Any], required: Tuple[str, ...]) -> None:
    missing = [k for k in required if not payload.get(k)]
    if missing:
        raise ValueError(f"Required field(s) missing or invalid: {', '.join(missing)}")

# -------- Mapping from your intake/lr_fields to Bridgify params --------

def build_matter_payload(obj: IntegrationMappingObject) -> Dict[str, Any]:
    """
    Creates the JSON payload for POST /matter based on your IntegrationMappingObject.
    Applies custom rules you specified.
    """
    data = obj.mapping      # you already have this in your code
    intake = obj.lr_fields  # Law Ruler (or source) fields

    # Known intake keys from your snippet:
    # '16' -> email
    # 'c-25913' -> facility state presence flag
    # 'c-25873' or '9' -> extra address text
    # 'c-25895' + 'c-25897' -> discussed_with_other_firm helper uses these

    # Map core fields (add/adjust keys to match your actual intake schema)
    first_name = (intake.get("CALLER_FIRST_NAME") or intake.get("first_name") or intake.get("1") or "").strip()
    last_name  = (intake.get("CALLER_LAST_NAME") or intake.get("last_name") or intake.get("2") or "").strip()
    phone      = norm_phone(intake.get("CALLER_PHONE") or intake.get("phone") or intake.get("3"))
    email      = norm_email(intake.get("16") or intake.get("email"))
    source     = (intake.get("source") or intake.get("campaign") or "").strip()
    address    = (intake.get("address") or intake.get("street") or intake.get("4") or "").strip()
    city       = (intake.get("city") or intake.get("5") or "").strip()
    state      = norm_state(intake.get("state") or intake.get("6"))
    zip_code   = norm_zip(intake.get("zip") or intake.get("7"))
    dob        = norm_date_mmddyyyy(intake.get("dob") or intake.get("8"))

    # Your specific rules:
    if not intake.get('c-25913'):
        # default facility state (keeping your original naming as facilityState,
        # but Bridgify's query shows "state" as a top-level parameter; we keep both:
        data['facilityState'] = 'Illinois'
        # do not override 'state' unless you intend to—leaving 'state' as-is

    if intake.get('c-25873') or intake.get('9'):
        # Append extra location notes to address
        extra = intake.get('c-25873') or intake.get('9')
        if extra:
            address = f"{address} {extra}".strip()

    # Call your shared helper (side effects as you designed)
    discussed_with_other_firm(obj, 'c-25895', 'c-25897')

    # Static values from your snippet
    matter_type   = "Sexual Abuse"
    handling_firm = "Stinar Gould Greico & Hensley"
    order_id      = "6946121"
    vendor        = (intake.get("vendor") or "").strip()

    payload = {
        "firstName": first_name or None,
        "lastName": last_name or None,
        "matterType": matter_type,             # required by your business rule
        "phone": phone,
        "email": email,
        "source": source or None,
        "address": address or None,
        "city": city or None,
        "state": state,
        "zipCode": zip_code,
        "dob": dob,
        "orderId": order_id,
        "vendor": vendor or None,
        "handlingFirm": handling_firm,
    }

    # Validate required fields for Bridgify (based on the querystring list they showed).
    # If their API truly requires at least first/last and some contact path, enforce here:
    require_fields(
        payload,
        ("firstName", "lastName", "matterType", "handlingFirm")
    )

    # Optional sanity check: at least one of phone/email present
    if not (payload.get("phone") or payload.get("email")):
        raise ValueError("Either phone or email must be provided.")

    return payload

# -------- API calls --------

def post_matter(payload: Dict[str, Any]) -> Dict[str, Any]:
    """
    POSTs JSON to /matter and returns parsed JSON response.
    """
    session = _session()
    headers = _auth_headers()

    # Some vendors put params in query string; Bridgify doc shows them in the query,
    # but the header also says Content-Type: application/json. They typically accept body JSON.
    # We'll send JSON in the body per header guidance.
    resp = session.post(MATTER_URL, headers=headers, json=payload, timeout=20)
    try:
        resp.raise_for_status()
    except requests.HTTPError as e:
        # bubble up a readable message with server reply
        raise RuntimeError(f"Matter POST failed: {e} | body={resp.text[:500]}")

    data = resp.json() if resp.headers.get("content-type", "").startswith("application/json") else {}
    # Expect something like {"success": true, "id": "..."} — adapt if different
    if not data:
        raise RuntimeError("Matter POST returned no JSON.")
    return data

def upload_document(matter_id: str, file_path: str, filename_override: Optional[str] = None) -> Dict[str, Any]:
    """
    Uploads a document with multipart/form-data to /document/upload.
    - matter_id: ID returned from post_matter response
    - file_path: path to the file on disk
    """
    session = _session()
    headers = _auth_headers()
    # Remove JSON content-type so requests can set multipart boundary
    headers.pop("Content-Type", None)

    name = filename_override or os.path.basename(file_path)
    with open(file_path, "rb") as f:
        files = {
            "matter_id": (None, matter_id),
            "file": (name, f, "application/octet-stream"),
        }
        resp = session.post(DOC_UPLOAD_URL, headers=headers, files=files, timeout=60)
    try:
        resp.raise_for_status()
    except requests.HTTPError as e:
        raise RuntimeError(f"Document upload failed: {e} | body={resp.text[:500]}")

    return resp.json() if resp.headers.get("content-type", "").startswith("application/json") else {"ok": True}

# -------- Orchestration entrypoint (your main) --------

def main(obj: IntegrationMappingObject, *, document_path: Optional[str] = None) -> Dict[str, Any]:
    """
    - Builds & posts matter JSON
    - Optionally uploads a document to the created matter
    - Returns a consolidated result dict with IDs and raw API responses
    """
    payload = build_matter_payload(obj)
    matter_resp = post_matter(payload)

    # Expect a returned id—adapt the key names to the actual API response structure
    matter_id = str(
        matter_resp.get("id")
        or matter_resp.get("matterId")
        or matter_resp.get("data", {}).get("id")
        or ""
    ).strip()

    if not matter_id:
        # Some APIs return the id nested or under another key—log the whole thing to help debugging
        raise RuntimeError(f"Could not find matter id in response: {json.dumps(matter_resp)[:800]}")

    doc_resp = None
    if document_path:
        doc_resp = upload_document(matter_id, document_path)

    # Keep a small, clean return for upstream logging/trace
    return {
        "submitted_payload": payload,
        "matter_response": matter_resp,
        "matter_id": matter_id,
        "document_response": doc_resp,
    }
Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-27 11:10:21

Here’s a polished email draft you can use:


Subject: Understanding the Difference Between Old and New Named Credentials in Salesforce

Hi [Recipient Name],

I wanted to share a quick overview of the changes Salesforce has introduced with the new Named Credentials compared to the legacy approach. This update is important if you’re working with external integrations and authentication within Salesforce.

What’s Changed?

Legacy Named Credentials

Used primarily to store a URL and authentication details (username, password, OAuth tokens).

Required separate Auth Providers for OAuth flows.

Limited flexibility for advanced scenarios like external services using multiple identities.

New Named Credentials (Enhanced)

Introduced as part of the External Credentials and Authentication Principal framework.

Allows decoupling authentication from the endpoint, so the same external credential can be reused across multiple integrations.

Supports custom headers, parameterized endpoints, and dynamic authentication.

Offers better security alignment and future-proofing as Salesforce phases out the old model.

Key Benefits of the New Model

Easier management of multiple authentication methods.

Better alignment with modern security practices.

Future readiness, as the old model will eventually be deprecated.

You can review the full official documentation here: Salesforce Official Guide on Named Credentials

If you have any questions about migrating or setting these up, feel free to reach out—I’d be happy to assist.

Best regards, [Your Name] [Your Title]

---

Would you like me to also create a comparison table (Old vs New) to include in the email or as an attachment? It would make this much easier for non-technical stakeholders.

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-27 15:45:22

GCP Integrations – Project Skeleton & Editing Guide

A minimal, firm‑agnostic guide to how the repository is laid out and how to add/edit campaigns.

---

1) Folder Layout (at a glance)

GCP integrations/
├─ common/                # shared helpers (optional: parsing, attachments, logging)
│  ├─ address.py          # e.g., parse single-line address → city/state/zip
│  ├─ files.py            # e.g., add PDF/CSV to payload (base64 helpers)
│  ├─ util.py             # e.g., clean strings, omit None, etc.
│  └─ __init__.py
├─ firm_acme/
│  ├─ main.py             # firm-level processor (shared URLs, headers, defaults)
│  ├─ campaigns/
│  │  ├─ 001_widget-a/
│  │  │  ├─ campaign.txt  # human-readable name/notes
│  │  │  ├─ questions.csv # exported question set
│  │  │  ├─ questions.json# same questions as JSON
│  │  │  ├─ mapping.json  # field mapping for this campaign
│  │  │  └─ main.py       # campaign-specific logic (overrides/extra steps)
│  │  ├─ 002_widget-b/
│  │  │  ├─ campaign.txt
│  │  │  ├─ questions.csv
│  │  │  ├─ questions.json
│  │  │  ├─ mapping.json
│  │  │  └─ main.py
│  │  └─ ...
│  └─ README.md           # short notes for this firm (tokens/URLs/quirks)
├─ firm_foobar/
│  ├─ main.py
│  └─ campaigns/...
└─ root_main.py           # Cloud Function entry: routes request → firm → campaign

> Names don’t have to be exactly this; keep the spirit: root → firm → campaign.

---

2) How the function routes work

  1. HTTP request hits root_main.py.

  2. root_main.py resolves the firm and campaign number from the request (via function_mapping(request) or similar).

  3. If a campaign folder has a main.py, run it. Otherwise, fall back to the firm’s main.py.

  4. Firm main.py can set URLs, headers, and shared steps; campaign main.py can add/override logic.

This lets every firm share defaults, while each campaign can tweak mapping and behavior.
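The fall-through in steps 3–4 can be sketched with importlib; the module paths below are hypothetical, matching the layout above:

```python
import importlib

def resolve_handler(firm: str, campaign: str):
    """Try the campaign module first, then fall back to the firm module.
    Assumes a package layout like firm_<name>/campaigns/<campaign>/main.py."""
    candidates = (
        f"firm_{firm}.campaigns.{campaign}.main",  # campaign-specific main.py
        f"firm_{firm}.main",                       # firm-level fallback
    )
    for mod_name in candidates:
        try:
            return importlib.import_module(mod_name)
        except ModuleNotFoundError:
            continue
    raise LookupError(f"No handler for firm={firm!r}, campaign={campaign!r}")
```

When neither module exists, the router surfaces a clear LookupError instead of a bare import failure.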

---

3) What lives in each campaign folder

campaign.txt — Plain‑text label and any analyst notes.

questions.csv — Tabular questions used by marketing/ops.

questions.json — Same questions in JSON (easier for code to consume).

mapping.json — Key/value mapping from your intake fields to the firm’s schema.

main.py — Optional campaign‑specific code (e.g., pre/post transforms, payload assembly).

Keep it simple: if there’s no campaign main.py, the firm’s main.py handles everything.

---

4) Editing a mapping (fast path)

  1. Open the campaign’s mapping.json.

  2. Add or change field keys to match the firm’s required payload.

  3. If you added new inputs, make sure your code pulls them from obj.mapping (or your equivalent input dict) before assembling the payload.

  4. If no code change is needed, you’re done. If new logic is needed (e.g., formatting a date), add it to the campaign main.py (preferred) or the firm main.py if all campaigns need it.

> Rule of thumb: campaign‑specific rule → campaign main.py; shared across campaigns → firm main.py.

---

5) Minimal firm main.py (template)

firm_<name>/main.py

import json
from common.util import clean_structure, omit_none
from common.files import add_pdf_attachment_from_path, add_qa_csv_from_intake_fields

FIRM_URL = "https://example.com/intake"
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

def build_payload(obj) -> dict:
    # Pull base mapping for this campaign (already decided by router)
    mapping = obj.mapping  # provided by function_mapping(request)

    # Assemble minimal firm-agnostic shape
    payload = {
        "test": mapping.get("TEST_FLAG", ""),
        "parties": mapping.get("parties", []),
        "intakes": mapping.get("intakes", [{}])
    }

    # Optional normalization
    return omit_none(clean_structure(payload))

def send(payload, requests):
    return requests.post(FIRM_URL, json=payload, headers=HEADERS)

Keep this tiny; let campaigns override build_payload if they need a different schema.

---

6) Minimal campaign main.py (template)

firm_<name>/campaigns/001_foo/main.py

from common.address import set_emergency_contact_address
from common.files import add_pdf_attachment_from_path, add_qa_csv_from_intake_fields
from common.util import clean_structure, omit_none

def main(obj):
    # 1) Start from firm payload
    payload = obj.firm.build_payload(obj)  # or import firm builder directly

    # 2) Campaign tweaks (optional)
    ec_addr = obj.mapping.get("EC_ADDRESS", "")
    set_emergency_contact_address(payload, ec_addr)

    # Attachments (optional)
    # add_pdf_attachment_from_path(payload, obj.mapping.get("PDF_PATH", ""))
    # add_qa_csv_from_intake_fields(payload, intake_idx=0, csv_filename="intake_qna.csv")

    payload = omit_none(clean_structure(payload))

    # 3) Send
    resp = obj.firm.send(payload, obj.requests)

    # 4) Persist
    obj.postgres.update_response(obj.log_row['id'], f"{resp.status_code} {resp.text}")
    case_id = resp.json().get('Case_ID')  # may vary by firm
    if case_id:
        obj.postgres.update_external_id(obj.log_row['id'], case_id)

> In your router you can inject obj.firm (the imported firm module) and obj.requests so campaign code stays tiny.

---

7) Adding a new firm

  1. Create firm_<name>/ with a main.py and campaigns/ folder.

  2. Put shared constants (URLs, headers, defaults) in the firm main.py.

  3. Add a short README.md with any special notes (tokens, expected fields, quirks).

---

8) Adding a new campaign

  1. Create firm_<name>/campaigns/<###>_<slug>/.

  2. Drop in:

campaign.txt

questions.csv and/or questions.json

mapping.json

main.py (optional; only if you need special behavior)

  3. Update your router (if needed) to recognize the new campaign number.

---

9) Common utilities (optional)

Place light, reusable helpers under common/:

address.py – parse_address_with_unit(), set_emergency_contact_address()

files.py – add_pdf_attachment_from_path(), add_qa_csv_from_intake_fields()

util.py – clean_structure(), omit_none(), clean_text()

This keeps campaign code short and consistent across firms.
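omit_none is referenced above but never shown; a minimal sketch of what it is assumed to do (recursively dropping None values before a payload is sent):

```python
def omit_none(data):
    # Recursively drop None values from dicts and lists; other values pass through
    if isinstance(data, dict):
        return {k: omit_none(v) for k, v in data.items() if v is not None}
    if isinstance(data, list):
        return [omit_none(x) for x in data if x is not None]
    return data
```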

---

10) Editing questions

Update questions.csv for analysts; keep questions.json as the source for code.

If your campaign main.py reads questions, import the JSON and map to payload fields.

If you want a CSV of Q/A attached to the intake, call add_qa_csv_from_intake_fields(...).

---

11) Safety & conventions

Keep tokens/URLs out of campaign folders; store them in the firm main.py (or env/Secret Manager).

Campaigns should be data-first (mapping + files). Only add code when necessary.

Prefer small, testable helpers; avoid large coupled modules.

---

12) Minimal router notes (root_main.py)

Resolve firm and campaign from the request (headers/body/path).

Import firm module: firm_mod = importlib.import_module(f"firm_{firm}.main").

If the campaign has a main.py, import and call campaign.main(obj); else call firm_mod.build_payload → firm_mod.send.

Persist responses/IDs via obj.postgres.

That’s it. Keep this skeleton lean; fill in only what a firm or campaign truly needs.

Done. I put together a lean, firm-agnostic “GCP Integrations – Project Skeleton & Editing Guide” with:

a simple repo structure (root → firm → numbered campaigns),

what belongs in each folder/file,

tiny firm/campaign main.py templates,

notes on routing, adding firms/campaigns, and common utilities.

Want me to add a one-page “Quick Start” checklist or a sample root_main.py router stub next?

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-28 17:03:22

📘

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-28 17:03:33

import base64
import csv
import importlib
import inspect
import io
import logging
from os.path import exists

import functions_framework
import requests
from flask import Request

from cloudpostgres import IntegrationPostgres
from gcputils import function_mapping, IntegrationMappingObject  # , parse_address_with_unit
from scriptTest import parse_address_with_unit

AUTH_PARAMS: dict[str, str] = {
    'grant_type': 'refresh_token',
    'client_id': '',
    'client_secret': '',
}

@functions_framework.http
def main(request: Request):
    obj: IntegrationMappingObject = function_mapping(request)
    logger = logging.getLogger(f'{__file__}.{inspect.currentframe().f_code.co_name}')
    case_type = obj.log_row['case_type_id']
    postgres: IntegrationPostgres = obj.postgres

    if not exists(f"{case_type}/mapping.json"):
        logger.warning('Case type %s not found', case_type)
        return ''

    # If custom main.py exists for case type, run it
    if exists(f"{case_type}/main.py"):
        importlib.import_module(f"{case_type}.main").main(obj)

    add_qa_csv_from_list()
    handle_Milberg(obj)
    logger.info('Finished')
    return ''

def handle_Milberg(obj: IntegrationMappingObject) -> str:
    # Sends records; hard errors bubble up to monitoring.
    headers = get_headers(AUTH_PARAMS)
    url = 'https://spglawfirm.my.salesforce.com/services/apexrest/Mia/'
    intake_response = requests.post(url, data=obj.mapping, headers=headers)

    # Log raw response for tracking
    obj.postgres.update_response(obj.log_row['id'], f'{intake_response.__str__()} {intake_response.text}')

    # Parse response and save case ID
    case_id = intake_response.json()['intakes'][0]['caseType']
    obj.postgres.update_external_id(obj.log_row['id'], case_id)
    return case_id

def street_with_unit(street, unit):
    if not street:
        return None
    return f"{street} {unit}".strip() if unit else street

def add_qa_csv_from_list(payload: dict, qa_list: list[dict], intake_idx: int = 0, csv_filename: str = "intake_qa.csv") -> None:
    buf = io.StringIO(newline="")
    writer = csv.writer(buf)
    writer.writerow(["Question", "Answer"])
    for item in qa_list:
        writer.writerow([item.get("q", "") or "", item.get("a", "") or ""])
    csv_bytes = buf.getvalue().encode("utf-8")
    attachments = _ensure_attachment_array(payload, intake_idx)
    attachments.append(_to_attachment(csv_filename, csv_bytes))

def _ensure_attachment_array(payload: dict, intake_idx: int = 0) -> list:
    intakes = payload.setdefault("intakes", [])
    while len(intakes) <= intake_idx:
        intakes.append({})
    attachments = intakes[intake_idx].setdefault("AttachmentFiles", [])
    return attachments

def _to_attachment(name: str, raw_bytes: bytes) -> dict:
    # No try/except, hard errors bubble up
    b64 = base64.b64encode(raw_bytes).decode("ascii")
    return {"name": name, "Attachment": b64}

def get_headers(auth_params: dict[str, str]) -> dict[str, str] | tuple[str, int]:
    # Salesforce Authentication Credentials
    auth_url = "https://login.salesforce.com/services/oauth2/token"

    # Authenticate with Salesforce
    auth_response = requests.post(auth_url, data=auth_params)
    if auth_response.status_code != 200:
        print("intake authentication", auth_response.text, auth_response.status_code)
        return auth_response.text, auth_response.status_code

    # Retrieve access token
    access_token = auth_response.json()['access_token']
    if not access_token:
        return "Failed to retrieve access token.", 500

    headers = {
        'Authorization': f"Bearer {access_token}",
        'verification_token': 'ShieldLegal',
        'Content-Type': 'application/json'
    }
    return headers

def split_name_role(obj: IntegrationMappingObject, role_description: str, full_name_field: str):
    role = find_role_from_description(obj, role_description)
    split_name_obj(obj, role, full_name_field)

def split_name_obj(obj: IntegrationMappingObject, obj_with_name: dict[str, str], full_name_field: str):
    if not obj_with_name['lastName'] and full_name_field:
        full_name = obj.lr_fields[full_name_field].strip()
        parts = full_name.split(" ")  # Split on the spaces
        if len(parts) > 1:
            obj_with_name["firstName"] = parts[0]
            obj_with_name["lastName"] = parts[-1]
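The first-token/last-token rule used by split_name_obj, shown standalone (hypothetical helper name, sample names made up; note that middle names are dropped):

```python
def split_full_name(full_name: str) -> tuple[str, str]:
    # First token becomes firstName, last token becomes lastName;
    # middle names are dropped, same as split_name_obj above
    parts = full_name.strip().split(" ")
    if len(parts) > 1:
        return parts[0], parts[-1]
    return full_name.strip(), ""
```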

def find_role_from_description(obj: IntegrationMappingObject, role_description: str) -> dict[str, str]:
    data = obj.mapping
    intake = obj.lr_fields

    for role in data["roles"]:
        if role["roleDescription"] == role_description:
            return role
    else:
        raise ValueError(f"[{intake['Lead ID']} / {intake['Lead Data']['CaseTypeId']}] Role {role_description} not found in roles")
Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-28 17:04:06

JSON: { "General Case Questions": { "When were you incarcerated at Chowchilla? (List all date ranges)": "c-25921", "Did the abuse involve inappropriate sexual touching or sexual contact of any type?": "c-25922", "Were you sexually abused by an officer, guard, or warden at Chowchilla Correctional Facility?": "c-25923", "When did the abuse happen? (AT LEAST MONTH/YEAR)": "c-25924", "Would the injured party be able to recognize the name of their abuser if they heard it?": "c-25925", "Signer's First & Last Name:": "c-25833", "Would the injured party be able to recognize their abuser if shown a picture of them?": "c-25926", "Can the injured party recall the name of their abuser?": "c-25927", "How many abusers were there?": "c-25928", "Signer's Suffix:": "c-25837", "(Abuser 1) Sex:": "c-25930", "(Abuser 1) Age:": "c-25931", "(Abuser 1) Race/Ethnicity:": "c-25932", "(Abuser 1) Name (Full name if available):": "c-25933", "(Abuser 1) If Other/Multiracial:": "c-25934", "(Abuser 1) Height:": "c-25935", "(Abuser 1) Weight/Build:": "c-25936", "(Abuser 1) Hair:": "c-25937", "(Abuser 1) Eyes:": "c-25938", "(Abuser 1) Distinguishing Marks/Tattoos/Characteristics:": "c-25939", "(Abuser 1) Other Notes:": "c-25940", "(Abuser 1) Role at the facility:": "c-25941", "(Abuser 2) Name (Full name if available):": "c-25943", "(Abuser 2) Role at the facility:": "c-25944", "VSS Live Transfer:": "c-22043", "If yes, Date of Death:": "c-25842", "(Abuser 2) Sex:": "c-25945", "File Posted to VSS:": "c-22044", "(Abuser 2) Age:": "c-25946", "Signer's First Name:": "c-25843", "(Abuser 2) Race/Ethnicity:": "c-25947", "(Abuser 2) If Other/Multiracial:": "c-25948", "Signer's Middle Name:": "c-25844", "(Abuser 2) Height:": "c-25949", "(Abuser 2) Weight/Build:": "c-25950", "Signer's Last Name:": "c-25845", "(Abuser 2) Hair:": "c-25951", "(Abuser 2) Eyes:": "c-25952", "Signer's SSN:": "c-25846", "(Abuser 2) Distinguishing Marks/Tattoos/Characteristics:": "c-25953", "(Abuser 2) Other Notes:": "c-25954", 
"Last Four of Signer's SSN:": "c-25847", "Signer's Date of Birth:": "c-25848", "Injured Party's Prefix:": "c-25849", "(Abuser 3) Name (Full name if available):": "c-25956", "(Abuser 3) Role at the facility:": "c-25957", "Injured Party's Full Name:": "c-25850", "(Abuser 3) Sex:": "c-25958", "(Abuser 3) Age:": "c-25959", "Signer's Full Mailing Address:": "c-25851", "(Abuser 3) Race/Ethnicity:": "c-25960", "(Abuser 3) If Other/Multiracial:": "c-25961", "Signer's Mailing Address Line 1 (123 Main St.):": "c-25852", "(Abuser 3) Height:": "c-25962", "(Abuser 3) Weight/Build:": "c-25963", "Signer's Mailing Address Line 2 (APT. 1):": "c-25853", "(Abuser 3) Hair:": "c-25964", "(Abuser 3) Eyes:": "c-25965", "Signer's City:": "c-25854", "(Abuser 3) Distinguishing Marks/Tattoos/Characteristics:": "c-25966", "(Abuser 3) Other Notes:": "c-25967", "Signer's State:": "c-25855", "(Abuser 4+) Any additional abusers, list them here:\nMake sure to include:\nName:\nRole at facility:\nWhich facility the abuser was located:\nWhere first encountered the abuser:\nSex:\nAge:\nRace/Ethnicity:\nHeight:\nWeight/Build:\nHair:\nEyes:\nDistinguishing Marks/Tattoos/Characteristics:": "c-25968", "(Abuser 1) How many instances of abuse were there?": "c-25971", "Signer's Zip Code:": "c-25856", "(Abuser 1) Where in the facility did each incident of abuse take place?": "c-25972", "Signer's Primary Phone Number:": "c-25857", "(Abuser 1)\u00a0Can you please provide details about how each incident occurred?": "c-25973", "Signer's Secondary Phone Number:": "c-25858", "(Abuser 1) Where did you first encounter the abuser?": "c-25974", "Signer's Primary Email Address:": "c-25859", "(Abuser 2) How many instances of abuse were there?": "c-25975", "(Abuser 2) Where in the facility did each incident of abuse take place?": "c-25976", "Injured Party's Suffix:": "c-25861", "(Abuser 2) Can you please provide details about how each incident occurred?": "c-25977", "What is the best day to contact you?": "c-25862", 
"(Abuser 2) Where did you first encounter the abuser?": "c-25978", "Injured Party's First Name:": "c-25864", "(Abuser 3) Where did you first encounter the abuser?": "c-25979", "Injured Party's Middle Name:": "c-25865", "(Abuser 3) How many instances of abuse were there?": "c-25980", "Injured Party's Last Name:": "c-25866", "(Abuser 3) Where in the facility did each incident of abuse take place?": "c-25981", "What is your time zone?": "c-25867", "(Abuser 3) Can you please provide details about how each incident occurred?": "c-25982", "Injured Party's SSN:": "c-25868", "(Abuser 4+) List incidents of abuse here:\nMake sure to include:\nNature of abuse:\nHow many instances of abuse:\nWhere in the facility did each incident of abuse take place:\nDetails of abuse:": "c-25983", "Last Four of Injured Party's SSN:": "c-25869", "(Abuser 1) What is the most severe type of sexual abuse that you experienced with this abuser?": "c-25984", "Injured Party's Date of Birth:": "c-25870", "(Abuser 2) What is the most severe type of sexual abuse that you experienced with this abuser?": "c-25985", "Injured Party's Full Mailing Address:": "c-25871", "(Abuser 3) What is the most severe type of sexual abuse that you experienced with this abuser?": "c-25986", "Injured Party's Mailing Address Line 1 (123 Main St.):": "c-25872", "Injured Party's Mailing Address Line 2 (APT. 
1):": "c-25873", "Injured Party's City:": "c-25874", "Injured Party's State:": "c-25875", "Injured Party's Zip Code:": "c-25876", "What is the best time to contact you?": "c-25877", "Injured Party's Primary Phone Number:": "c-25878", "Injured Party's Secondary Phone Number:": "c-25879", "Injured Party's Primary Email Address:": "c-25881", "Emergency Contact Name:": "c-25882", "Emergency Contact Relationship:": "c-25883", "Injured Party's Marital Status:": "c-25884", "Emergency Contact Address:": "c-25885", "Emergency Contact Phone Number:": "c-25886", "Did you report the abuse to anyone?": "c-26004", "If so, to whom?": "c-26005", "Was there an official report?": "c-26006", "Notes:": "c-25890", "Injured Party's Mailing Address Lines 1 & 2 (ex. 123 Tort Way, Unit 1):": "c-28779", "Injured Party's City, State, Zip (ex. Las Vegas, NV 89123):": "c-28780", "Signer's Relationship to Injured Party:": "c-25891", "Signer's Prefix:": "c-25892", "Are you currently represented by another attorney for this matter?": "c-25895", "Have you spoken to another attorney about this matter?": "c-25896", "Have you signed any documents with another law firm regarding this matter?": "c-25897", "Is the Injured Party deceased?": "c-25898", "Injured Party's Gender:": "c-25899", "Signer's Gender": "c-25900", "Do we have permission to contact this person if we are unable to get a hold of you?": "c-25902", "Do we have permission to reference your case to this person? 
(We will not discuss your case with anyone without your permission)": "c-25903", "Were you incarcerated at Chowchilla Correctional Facility?": "c-29278", "(Abuser 1) Facility the abuser was located:": "c-26064", "(Abuser 2) Facility the abuser was located:": "c-26066", "(Abuser 3) Facility the abuser was located:": "c-26068", "Was there skin-to-skin contact?": "c-29281", "Did the abuser wear gloves?": "c-29282", "Did anything else happen that you believe is important for us to know?": "c-29283", "Is there any additional information you would like to provide the attorneys regarding your claim?": "c-29284", "Are you currently incarcerated?": "c-29285", "Did you go by another alias while at Chowchilla? If so, please list any aliases, including maiden/married names while at Chowchilla:": "c-29286", "(Abuser 1) Did the abuser wear gloves?": "c-26427", "(Abuser 1) Was there skin-to-skin contact?": "c-26428", "(Abuser 2) Did the abuser wear gloves?": "c-26429", "(Abuser 2) Was there skin-to-skin contact?": "c-26430", "(Abuser 3) Did the abuser wear gloves?": "c-26431", "(Abuser 3) Was there skin-to-skin contact?": "c-26432", "Can the injured party provide a detailed description of their abuser?": "c-29237", "What is the name of the facility you were incarcerated in?": "c-29321" }, "Form Fields": { "S1": "c-10042", "ALT ID": "c-10043", "Comments": "c-5069", "Consent1:": "c-25542", "Consent2:": "c-25543", "Were You Incarcerated at\u00a0Chowchilla\u00a0Womens Prison": "c-5058", "Did you experience sexual abuse while incarcerated by probation officer, guard or staff": "c-5059", "Age at time of abuse": "c-5060", "Year abuse occurred": "c-5061", "bmid": "c-27767", "accountid": "c-27768", "datasetid": "c-27769", "At which prison did you experience sexual abuse while incarcerated by probation officer, guard or staff:": "c-29275", "Did the abuse occur AFTER 2009:": "c-29276", "Do you have a lawyer:": "c-29277" }, "Screener": { "Screening Completed:": "c-23028", "Screening 
Verified:": "c-23029", "Transferred to Closer:": "c-23030", "Verified By:": "c-23031", "Submit Screen:": "c-23032", "Are you sure you want to submit screen?": "c-23033", "Are you currently represented by another attorney for this matter?": "c-25895", "Have you spoken to another attorney about this matter?": "c-25896", "Have you signed any documents with another law firm regarding this matter?": "c-25897", "Were you incarcerated at Chowchilla Correctional Facility?": "c-27027", "What years was the injured party at Chowchilla/VSP/CIW?": "c-27770" } }

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-28 17:23:14
Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-28 17:38:43

import base64
import csv
import importlib
import inspect
import io
import logging
from os.path import exists

import functions_framework
import requests
from flask import Request

from cloud_postgres import IntegrationPostgres
from gcp_utils import function_mapping, IntegrationMappingObject  # , parse_address_with_unit
from scriptTest import parse_address_with_unit

AUTH_PARAMS: dict[str, str] = {
    'grant_type': 'refresh_token',
    'client_id': '',
    'client_secret': '',
}

@functions_framework.http
def main(request: Request):
    obj: IntegrationMappingObject = function_mapping(request)
    logger = logging.getLogger(f'{__file__}.{inspect.currentframe().f_code.co_name}')
    case_type = obj.log_row['case_type_id']
    postgres: IntegrationPostgres = obj.postgres

    if not exists(f"{case_type}/mapping.json"):
        logger.warning('Case type %s not found', case_type)
        return ''

    # If custom main.py exists for case type, run it
    if exists(f"{case_type}/main.py"):
        importlib.import_module(f"{case_type}.main").main(obj)

    add_qa_csv_from_list()
    handle_Milberg(obj)
    logger.info('Finished')
    return ''

def handle_Milberg(obj: IntegrationMappingObject) -> str:
    # Sends records; hard errors bubble up to monitoring.
    headers = get_headers()
    url = 'https://spglawfirm.my.salesforce.com/services/apexrest/Mia/'
    intake_response = requests.post(url, data=obj.mapping, headers=headers)

    # Log raw response for tracking
    obj.postgres.update_response(obj.log_row['id'], f'{intake_response.__str__()} {intake_response.text}')

    # Parse response and save case ID
    case_id = intake_response.json()['intakes'][0]['caseType']
    obj.postgres.update_external_id(obj.log_row['id'], case_id)
    return case_id

def street_with_unit(street, unit):
    if not street:
        return None
    return f"{street} {unit}".strip() if unit else street

def add_qa_csv_from_list(payload: dict, qa_list: list[dict], intake_idx: int = 0, csv_filename: str = "intake_qa.csv") -> None:
    buf = io.StringIO(newline="")
    writer = csv.writer(buf)
    writer.writerow(["Question", "Answer"])
    for item in qa_list:
        writer.writerow([item.get("q", "") or "", item.get("a", "") or ""])
    csv_bytes = buf.getvalue().encode("utf-8")
    attachments = ensure_attachment_array(payload, intake_idx)
    attachments.append(to_attachment(csv_filename, csv_bytes))

def ensure_attachment_array(payload: dict, intake_idx: int = 0) -> list:
    intakes = payload.setdefault("intakes", [])
    while len(intakes) <= intake_idx:
        intakes.append({})
    attachments = intakes[intake_idx].setdefault("AttachmentFiles", [])
    return attachments

def to_attachment(name: str, raw_bytes: bytes) -> dict:
    # No try/except, hard errors bubble up
    b64 = base64.b64encode(raw_bytes).decode("ascii")
    return {"name": name, "Attachment": b64}

def get_headers(auth_params: dict[str, str]) -> dict[str, str] | tuple[str, int]:
    # Salesforce Authentication Credentials
    auth_url = "https://login.salesforce.com/services/oauth2/token"

    # Authenticate with Salesforce
    auth_response = requests.post(auth_url, data=auth_params)
    if auth_response.status_code != 200:
        print("intake authentication", auth_response.text, auth_response.status_code)
        return auth_response.text, auth_response.status_code

    # Retrieve access token
    access_token = auth_response.json()['access_token']
    if not access_token:
        return "Failed to retrieve access token.", 500

    headers = {
        'Authorization': f"Bearer {access_token}",
        'verification_token': 'ShieldLegal',
        'Content-Type': 'application/json'
    }
    return headers

def split_name_role(obj: IntegrationMappingObject, role_description: str, full_name_field: str):
    role = find_role_from_description(obj, role_description)
    split_name_obj(obj, role, full_name_field)

def split_name_obj(obj: IntegrationMappingObject, obj_with_name: dict[str, str], full_name_field: str):
    if not obj_with_name['lastName'] and full_name_field:
        full_name = obj.lr_fields[full_name_field].strip()
        parts = full_name.split(" ")  # Split on the spaces
        if len(parts) > 1:
            obj_with_name["firstName"] = parts[0]
            obj_with_name["lastName"] = parts[-1]

def find_role_from_description(obj: IntegrationMappingObject, role_description: str) -> dict[str, str]:
    data = obj.mapping
    intake = obj.lr_fields

    for role in data["roles"]:
        if role["roleDescription"] == role_description:
            return role
    else:
        raise ValueError(f"[{intake['Lead ID']} / {intake['Lead Data']['CaseTypeId']}] Role {role_description} not found in roles")
Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-28 17:45:27

Great, let’s tighten this up around the CSV bits and a couple of small-but-critical fixes to your current file.

What needs fixing right now

  1. You call add_qa_csv_from_list() with no arguments. Provide payload, a qa_list, and (optionally) intake_idx, csv_filename.

  2. get_headers() is defined to accept auth_params but you call get_headers() with none. Pass AUTH_PARAMS, and make get_headers return only headers (hard-error on failures to match your alerting style).

  3. CSV “Q/A list” source. You pasted a schema JSON that maps human-readable questions → field codes (e.g., "Injured Party's First Name:": "c-25864"). We need a helper to build qa_list = [{"q": <question>, "a": <answer>}, ...] by pulling answers from obj.mapping (or obj.lr_fields if that’s where the values live).

Below is a compact, drop-in set of helpers + minimal edits.

---

Drop-in helpers

import base64
import csv
import io

def build_qa_list_from_schema(schema_section: dict[str, str], source: dict) -> list[dict[str, str]]:
    """
    Turn a mapping of {Question Text -> field_code} into a list of {"q","a"}
    dicts by looking up answers in source (e.g., obj.mapping). Keeps the
    insertion order of schema_section (JSON preserves order in Py3.7+).
    """
    qa = []
    for question, field_code in schema_section.items():
        answer = source.get(field_code, "")
        # stringify non-str answers (dates, numbers) to keep CSV simple
        if answer is None:
            answer = ""
        elif not isinstance(answer, str):
            answer = str(answer)
        qa.append({"q": question, "a": answer})
    return qa

def ensure_attachment_array(payload: dict, intake_idx: int = 0) -> list:
    intakes = payload.setdefault("intakes", [])
    while len(intakes) <= intake_idx:
        intakes.append({})
    return intakes[intake_idx].setdefault("AttachmentFiles", [])

def to_attachment(name: str, raw_bytes: bytes) -> dict:
    b64 = base64.b64encode(raw_bytes).decode("ascii")
    return {"name": name, "Attachment": b64}

def add_qa_csv_from_list(payload: dict, qa_list: list[dict[str, str]], intake_idx: int = 0, csv_filename: str = "intake_qa.csv") -> None:
    """
    Writes a 2-column CSV (Question, Answer) into AttachmentFiles for the given intake.
    """
    buf = io.StringIO(newline="")
    writer = csv.writer(buf)
    writer.writerow(["Question", "Answer"])
    for item in qa_list:
        writer.writerow([item.get("q", "") or "", item.get("a", "") or ""])
    csv_bytes = buf.getvalue().encode("utf-8")

    attachments = ensure_attachment_array(payload, intake_idx)
    attachments.append(to_attachment(csv_filename, csv_bytes))
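Quick smoke test of the helpers (the helper bodies are restated compactly here so the snippet runs standalone; the `intakes[0].AttachmentFiles` shape is the assumed Milberg payload structure, not confirmed by their docs):

```python
import base64
import csv
import io

# Compact restatement of the helpers above so this smoke test runs standalone.
def ensure_attachment_array(payload: dict, intake_idx: int = 0) -> list:
    intakes = payload.setdefault("intakes", [])
    while len(intakes) <= intake_idx:
        intakes.append({})
    return intakes[intake_idx].setdefault("AttachmentFiles", [])

def to_attachment(name: str, raw_bytes: bytes) -> dict:
    return {"name": name, "Attachment": base64.b64encode(raw_bytes).decode("ascii")}

def add_qa_csv_from_list(payload: dict, qa_list: list, intake_idx: int = 0,
                         csv_filename: str = "intake_qa.csv") -> None:
    buf = io.StringIO(newline="")
    writer = csv.writer(buf)
    writer.writerow(["Question", "Answer"])
    for item in qa_list:
        writer.writerow([item.get("q", "") or "", item.get("a", "") or ""])
    attachments = ensure_attachment_array(payload, intake_idx)
    attachments.append(to_attachment(csv_filename, buf.getvalue().encode("utf-8")))

# Start from an empty payload: the helper scaffolds intakes[0].AttachmentFiles.
payload: dict = {}
add_qa_csv_from_list(payload, [{"q": "Signer's First Name:", "a": "Jane"}])

attachment = payload["intakes"][0]["AttachmentFiles"][0]
print(attachment["name"])  # intake_qa.csv
print(base64.b64decode(attachment["Attachment"]).decode("utf-8"))
```

The base64 round-trip confirms the CSV survives encoding intact; if the endpoint rejects the attachment, the `AttachmentFiles` key name is the first assumption to verify.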

---

Auth header fix (hard-error style)

def get_headers(auth_params: dict[str, str]) -> dict[str, str]:
    auth_url = "https://login.salesforce.com/services/oauth2/token"
    auth_response = requests.post(auth_url, data=auth_params)
    # Hard fail if bad auth
    auth_response.raise_for_status()

    access_token = auth_response.json().get('access_token')
    if not access_token:
        raise ValueError("Failed to retrieve access token")

    # NOTE: use form or JSON Content-Type to match what you POST later
    return {
        'Authorization': f"Bearer {access_token}",
        'verification_token': 'ShieldLegal',
        # If you're sending JSON for Milberg:
        'Content-Type': 'application/json'
        # If you're sending form-encoded for Milberg, change to:
        # 'Content-Type': 'application/x-www-form-urlencoded'
    }

---

Wiring it in your main

You need a payload object that has an intakes array if you plan to attach files (CSV/PDF). If you’re POSTing obj.mapping directly and that structure does not contain intakes[0].AttachmentFiles, attachments won’t go anywhere.

Two clean options:

Option A — Build a payload (recommended)

@functions_framework.http
def main(request: Request):
    obj: IntegrationMappingObject = function_mapping(request)
    logger = logging.getLogger(f'{__file__}.{inspect.currentframe().f_code.co_name}')
    case_type = obj.log_row['case_type_id']
    postgres: IntegrationPostgres = obj.postgres

    if not exists(f"{case_type}/mapping.json"):
        logger.warning('Case type %s not found', case_type)
        return ''

    if exists(f"{case_type}/main.py"):
        importlib.import_module(f"{case_type}.main").main(obj)

    # Build a payload scaffold that includes intakes[0]
    payload = {
        # carry over anything you need at top-level:
        "test": obj.mapping.get("test", "TEST"),
        "parties": obj.mapping.get("parties", []),
        "intakes": [obj.mapping.get("intakes", [{}])[0] if obj.mapping.get("intakes") else {}]
    }

    # Build Q/A from your schema JSON (choose the section you want in CSV)
    # Example: GENERAL_Q_SCHEMA is the dict under "General Case Questions"
    GENERAL_Q_SCHEMA: dict[str, str] = YOUR_LOADED_SCHEMA["General Case Questions"]

    qa_list = build_qa_list_from_schema(GENERAL_Q_SCHEMA, source=obj.mapping)
    add_qa_csv_from_list(payload, qa_list, intake_idx=0, csv_filename="intake_questions.csv")

    handle_Milberg(obj, payload)  # pass payload here
    logger.info('Finished')
    return ''

And adjust handle_Milberg to use payload:

def handle_Milberg(obj: IntegrationMappingObject, payload: dict) -> str:
    headers = get_headers(AUTH_PARAMS)
    url = 'https://spglawfirm.my.salesforce.com/services/apexrest/Mia/'

    # Choose JSON or form to match the endpoint
    intake_response = requests.post(url, json=payload, headers=headers)
    # If they truly require form-encoding, use `data=` and ensure the structure matches how their Apex REST parses it.

    obj.postgres.update_response(obj.log_row['id'], f'{intake_response} {intake_response.text}')
    case_id = intake_response.json()['intakes'][0]['caseType']  # adjust to their actual key!
    obj.postgres.update_external_id(obj.log_row['id'], case_id)
    return case_id

Option B — If you must keep posting obj.mapping

Then you must ensure obj.mapping contains an intakes array and you call:

qa_list = build_qa_list_from_schema(GENERAL_Q_SCHEMA, source=obj.mapping)
add_qa_csv_from_list(obj.mapping, qa_list, intake_idx=0, csv_filename="intake_questions.csv")

…but confirm Milberg’s API accepts attachments under intakes[0].AttachmentFiles when sent as form-encoded. If they require JSON, switch to json=payload.

---

About your pasted schema JSON

You can keep it as a file (questions.json) inside the campaign folder and load it:

import json
from pathlib import Path

def load_campaign_schema(case_type: str) -> dict:
    schema_path = Path(case_type) / "questions.json"
    with open(schema_path, "r", encoding="utf-8") as f:
        return json.load(f)

# usage
schema = load_campaign_schema(case_type)
GENERAL_Q_SCHEMA = schema["General Case Questions"]
qa_list = build_qa_list_from_schema(GENERAL_Q_SCHEMA, source=obj.mapping)
add_qa_csv_from_list(payload, qa_list)

If answers live in obj.lr_fields instead of obj.mapping, pass source=obj.lr_fields.

---

Quick checklist

[ ] Pass AUTH_PARAMS to get_headers(AUTH_PARAMS).

[ ] Decide JSON vs form for Milberg; set Content-Type accordingly.

[ ] Ensure the object you pass to add_qa_csv_from_list(...) has an intakes[0].

[ ] Build qa_list from your schema section + answer source dict.

[ ] Attach CSV via AttachmentFiles.

[ ] Adjust the case_id extraction path to the real response shape.

If you want, share a sample actual Milberg response body, and I’ll lock down the exact case_id extraction and whether we should switch to json= or leave it form-encoded.

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-29 15:56:06
Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-29 15:59:16

Here’s a solid evaluation criteria list you can use as the foundation for your cost–benefit analysis presentation comparing Salesforce vs. other tools. I’ve grouped them into major categories so you can present it clearly in slides:

---

  1. Cost Considerations

Licensing fees (per-user, per-month vs enterprise package deals).

Implementation costs (consultants, developers, migration).

Customization & development costs (Apex, LWC, integrations vs out-of-box features).

Maintenance costs (admin overhead, ongoing support, vendor fees).

Infrastructure savings (cloud hosting included vs needing separate infrastructure).

Upgrade & release costs (Salesforce’s automatic updates vs vendor manual upgrades).

---

  2. Functionality & Fit

Core features (CRM, case management, marketing automation, analytics).

Industry specialization (legal, healthcare, nonprofit, etc.).

Customization flexibility (metadata-driven config, low-code tools).

Integration capabilities (APIs, middleware support, AppExchange ecosystem).

Mobile & offline capabilities.

Scalability (adding users, features, and regions without major re-architecture).

---

  3. User Experience

Ease of use (UI design, training needs).

Role-based functionality (exec dashboards vs analyst vs support staff).

Learning curve (admin & dev skill requirements).

Adoption rate potential (how likely staff are to actually use the tool).

---

  4. Data Management & Reporting

Data model flexibility (custom objects, fields, relationships).

Reporting & analytics (built-in dashboards, advanced analytics, AI).

Data security & compliance (HIPAA, GDPR, FedRAMP, SOC 2).

Data migration ease (from legacy systems, spreadsheets, or other CRMs).

---

  5. Risk & Reliability

Vendor stability & market share (long-term viability of provider).

Ecosystem & community support (availability of talent, partners, and forums).

System uptime & SLAs (guaranteed availability).

Change management risk (difficulty of switching or sunsetting).

---

  6. Strategic Value

Alignment with business goals (customer growth, efficiency, compliance).

Innovation potential (AI, automation, future-proofing).

Competitive advantage (tools competitors are using).

Total cost of ownership vs return on investment (ROI).

---

📊 My suggestion for your presentation:

First slide: introduce categories (Costs, Functionality, UX, Data, Risk, Strategic).

Then make a comparison table: Salesforce vs Tool A vs Tool B, rating each criterion (✓, partial, ✗ or a 1–5 scale).

Wrap with a weighted scorecard or ROI model to show the recommendation clearly.

---

Would you like me to draft the presentation structure (slides outline + table template) next so you can drop in your org’s details, or should we first assign weights/importance to each of these criteria to make it more quantitative?

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-29 16:18:09

Great—let’s compare Salesforce (single-platform) vs a multi-tool stack (Leadspedia + LawRuler + Copper + “others”) with a tight focus on reporting complexity and cost (time & money).

---

At-a-glance: reporting complexity

| Area | Salesforce (single platform) | Multi-tool (Leadspedia + LawRuler + Copper + others) |
| --- | --- | --- |
| Native dashboards & reports | Robust builder; cross-object & joined reports for unified views. | Each tool has its own dashboards (Leadspedia analytics; LawRuler KPI dashboards; Copper reports). Cross-tool reporting requires exports or BI. |
| Cross-system reporting | Often possible in-app (joined reports, custom summary formulas). External BI used mainly for very advanced scenarios. | Requires a BI layer (e.g., Power BI / Looker Studio / Looker) + scheduled extracts or APIs to stitch data from Leadspedia, LawRuler, Copper. |
| Data freshness | Near-real-time inside Salesforce; fewer hops. | Typically hourly/daily refresh windows unless building API pipelines; more points of failure across tools. (Common practice with BI refresh & connector schedules.) |
| Effort to build a new cross-funnel report | Hours–days (admin builds report/dashboards; joined reports for multi-object). | Days–weeks (define entities/IDs across tools, build ETL, model in BI, QA joins). BI/ETL skills required. |
| Maintenance surface area | Lower (one data model, one release cycle). | Higher (multiple schemas, API changes, connector limits, auth/permissions across tools). |
| Typical “workarounds” | Joined reports; occasional export → Sheets for edge cases. | Frequent exports to Sheets/CSV; Copper itself promotes Sheets-based custom reporting. |

---

Money: current list-price signals (Aug 2025)

Salesforce Sales Cloud: Enterprise $165/user/mo, Unlimited $330/user/mo; Salesforce announced price increases effective Aug 1, 2025 (so budget with headroom).

Leadspedia: $1,500–$2,500/mo listed tiers; Enterprise is quoted.

LawRuler (legal CRM): public page shows plan tiers with onboarding fee and “includes up to 3 users”; exact monthly/user pricing is typically quote-based.

Copper CRM: tiers with public pricing widely reported (e.g., Starter/Basic/Professional/Business), Business ~$134/user/mo; Pro ~$69/user/mo (range varies by billing).

BI needed for multi-tool (typical picks & current pricing signals):

Power BI Pro $14/user/mo (price increased Apr 1, 2025).

Looker Studio Pro often cited as ~$9 per user per project / month (team/admin features; details vary).

Looker (core): enterprise pricing is custom; industry trackers peg ~$150k/yr average but highly variable.

---

Time: recurring effort you should expect

| Task | Salesforce | Multi-tool stack |
| --- | --- | --- |
| Monthly reporting prep (exports, stitching, QA) | Low: mostly inside SF reports/dashboards. | Medium–High: exports or API pulls from Leadspedia/LawRuler/Copper, model in BI, reconcile IDs. |
| Break/fix (auth, schema, connector changes) | Low–Medium. | Medium–High (several vendors, more moving parts). |
| New KPI across funnel (Marketing → Intake → CRM) | Build once in SF (objects/fields + report). | Build integration + transformations + BI model; repeat when a tool changes. |

(The right column is where teams often burn hidden hours.)

---

Illustrative 12-month TCO (example to size the order of magnitude)

> Assumptions (adjust these to your user counts): 25 CRM users; Copper “Pro” as a like-for-like tier; 10 people need BI author/view rights for cross-tool dashboards. LawRuler is vendor-quoted, so left as TBD.

Salesforce (Enterprise): 25 users × $165 = $4,125/mo → $49,500/yr (list).

Multi-tool licenses (partial):

Leadspedia: $1,500–$2,500/mo → $18,000–$30,000/yr.

Copper “Pro” example: 25 × $69 = $1,725/mo → $20,700/yr.

LawRuler: TBD (quote + onboarding fee).

BI (Power BI Pro for 10 users): 10 × $14 = $140/mo → $1,680/yr.

Subtotal (excluding LawRuler): $3,365–$4,365/mo → $40,380–$52,380/yr.

Key takeaway: even before adding LawRuler seats and any ETL/connectors, the multi-tool license spend is in the same ballpark as Salesforce Enterprise. The hidden cost is the time and risk of maintaining the reporting pipelines across vendors.
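The illustrative math above can be sketched as a small fill-in calculator (every figure is an assumption from this example: list prices as of Aug 2025, 25 CRM users, 10 BI users; LawRuler is quote-based and deliberately left out of the subtotal):

```python
# Fill-in 12-month TCO comparison using the example list prices above.
def annual(monthly: float) -> int:
    return round(monthly * 12)

crm_users = 25   # CRM seats (adjust to your org)
bi_users = 10    # people needing BI author/view rights

salesforce_yr = annual(crm_users * 165)  # Sales Cloud Enterprise, $165/user/mo

copper_mo = crm_users * 69               # Copper "Pro" example tier
bi_mo = bi_users * 14                    # Power BI Pro, $14/user/mo
multi_tool_low = annual(1500 + copper_mo + bi_mo)   # Leadspedia low tier
multi_tool_high = annual(2500 + copper_mo + bi_mo)  # Leadspedia high tier

print(f"Salesforce Enterprise: ${salesforce_yr:,}/yr")
print(f"Multi-tool (excl. LawRuler): ${multi_tool_low:,}–${multi_tool_high:,}/yr")
```

Swap in your real seat counts and negotiated pricing; the structure stays the same once LawRuler's quote arrives (add it as another monthly term).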

---

Reporting capability reality check (by product)

Leadspedia – strong for performance-marketing: real-time analytics on lead distribution, conversion, campaign ROI; great at its lane, but doesn’t natively join across your legal intake/CRM data.

LawRuler – legal-specific dashboards & marketing/ROI KPIs; good for intake/case metrics inside LawRuler, but cross-tool views need BI.

Copper – clean built-in reporting; Copper promotes Sheets exports and custom report builder for deeper/custom analytics, which is another signal you’ll be living in spreadsheets or BI when you go cross-stack.

Salesforce – mature joined reports and cross-object summaries for one-place visibility; you can still push to BI for advanced modeling, but many orgs keep most exec/ops reporting inside Salesforce.

---

Recommendation for your deck

  1. Scorecard (weighted to your priorities). Example weights: Reporting (50%), Time to insight (20%), License $ (15%), Maintenance overhead (15%). Give Salesforce a lower “complexity” score and show where multi-tool loses on cross-funnel KPIs.

  2. Cost slide (with ranges + citations). Use the figures above; call out Power BI price hike (Apr 1, 2025) as a consideration if you rely on BI to stitch systems.

  3. Risk slide. List failure points in multi-tool reporting: API auth, schema drift, connector limits, refresh delays, identity resolution mismatches.

  4. Path forward. Either (A) Salesforce-centric: consolidate intake + marketing signals into Salesforce and report there; or (B) Formal BI architecture: keep the stack but budget for BI + warehouse + integration time.
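The weighted scorecard from point 1 can be prototyped in a few lines (weights are the example values above; the 1–5 ratings here are hypothetical placeholders to be replaced with your own assessments):

```python
# Weighted scorecard: Reporting 50%, Time to insight 20%,
# License $ 15%, Maintenance overhead 15%.
weights = {
    "Reporting": 0.50,
    "Time to insight": 0.20,
    "License $": 0.15,
    "Maintenance overhead": 0.15,
}

# Hypothetical ratings for illustration only (1-5, higher is better).
scores = {
    "Salesforce": {"Reporting": 5, "Time to insight": 4,
                   "License $": 3, "Maintenance overhead": 4},
    "Multi-tool stack": {"Reporting": 2, "Time to insight": 2,
                         "License $": 3, "Maintenance overhead": 2},
}

for option, rating in scores.items():
    total = sum(weights[c] * rating[c] for c in weights)
    print(f"{option}: {total:.2f} / 5")
```

This is the same arithmetic the deck's scorecard slide would show; tune the weights with stakeholders first so the ranking reflects agreed priorities rather than the placeholder numbers.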

---

If you want, I can turn this into a slide outline + a fill-in calculator (you plug in user counts and it auto-totals Salesforce vs multi-tool TCO), and tailor the scorecard weights to your org.

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-29 16:33:06

Give me a copy paste safe version of the time breakdown please

Richard Schnitzler (rschnitzler@shield-legal.com)
2025-08-29 16:35:13

Here’s a PowerPoint/Google Slides–ready outline for the Time Breakdown section. Each “slide” is structured so you can copy/paste directly into your deck:

---

Slide Title: Time Breakdown – Monthly Reporting Prep

Salesforce (single platform):

Low effort – handled with built-in reports and dashboards

Minimal exports needed

Multi-tool stack (Leadspedia + LawRuler + Copper + others):

Medium–High effort – requires exports or API pulls

Data must be modeled in BI tool (Power BI, Looker, etc.)

Manual reconciliation of IDs across systems

---

Slide Title: Time Breakdown – Break/Fix Maintenance

Salesforce (single platform):

Low–Medium effort

Occasional schema updates or admin fixes

One vendor, one release cycle

Multi-tool stack:

Medium–High effort

Multiple vendors → more moving parts

API/authentication issues and schema drift create ongoing overhead

---

Slide Title: Time Breakdown – New KPI Across Funnel

(Marketing → Intake → CRM)

Salesforce (single platform):

Hours–days of effort

Add fields/objects and update report/dashboard inside Salesforce

Minimal dependencies

Multi-tool stack:

Days–weeks of effort

Build integrations, transformations, and BI models

Repeat effort whenever a tool changes its schema or API

---

👉 This way you have three slides: one per row of the time table, with bullets you can expand visually.

Do you want me to also bundle these into a single-slide summary graphic (all three time comparisons side-by-side), so you can pick between “deep dive” vs “executive summary” formats?