Parchment

Automating Transcript Collection and Processing with Parchment API and ProcessMaker TCE

Contact:

Please work directly with your assigned Account Executive and Solutions Consultant.

Released/Updated:

2025-05-15

Introduction

Modern institutions handle thousands of transcript exchanges each year. The manual processing of incoming student transcripts is labor-intensive, prone to delays, and susceptible to errors. By integrating Parchment’s transcript delivery service with ProcessMaker’s Transfer Credit Evaluation (TCE) platform, universities can achieve a fully automated transcript workflow – from secure collection via Parchment’s API to intelligent data extraction and import into a Student Information System (SIS).

This white paper outlines how IT teams can leverage the publicly available features of the Parchment API and ProcessMaker TCE to streamline transcript processing. We will cover Parchment’s transcript delivery process, API authentication and retrieval methods, the configuration of an automated workflow in ProcessMaker TCE (including a sample workflow), example API calls and scripts, and critical security considerations. The goal is to provide a clear blueprint for institutions looking to modernize and automate their transcript handling and transfer credit evaluation process.


Overview of Parchment’s Transcript Delivery Process

Parchment is a leading platform for the secure exchange of academic credentials, enabling schools and universities to send and receive official transcripts electronically.

The transcript delivery process typically works as follows:

  • Student Request: A student or alumnus requests an official transcript via Parchment (or a connected portal). Parchment coordinates with the sending institution to prepare the transcript (often in PDF or data format) and delivers it to the designated receiving institution’s Parchment account.

  • Parchment Send and Receive: Parchment’s Send service manages outgoing transcript orders from source institutions, while Parchment Receive handles incoming transcripts for destinations. Receiving institutions are notified when transcripts arrive in their Parchment “Inbox.” Administrators can log into Parchment’s secure web interface to download transcripts and related documents.

  • Delivery Formats: Parchment supports multiple transcript formats to accommodate various SIS integrations. Transcripts can be delivered as PDF or TIFF documents (digital copies of the transcript). They can also be delivered as structured data files, following standards like PESC XML or SPEEDE EDI. If the sending institution provides a data-enabled eTranscript, Parchment can pass along an XML/EDI transcript, which contains structured student, course, and grade data. This flexibility allows institutions to either process transcripts as documents or ingest them directly as data for automation.

  • Delivery Methods: For automation, Parchment offers an “auto-delivery” option for Premium Receive members. Instead of manual download, transcripts can be delivered directly to the institution’s systems via secure SFTP or web services. In practice, this means new transcript files (and an accompanying index file) can be pushed to an FTP/SFTP server or retrieved via an API, eliminating the need for staff to log into the Parchment interface. Parchment’s auto-delivery mechanism ensures transcripts flow into your campus IT environment in near real-time once they are available.

  • Index Files: When using Parchment’s Premium Receive service with automation, each batch of delivered transcripts comes with an index file (sometimes called a manifest or “packing slip”). This index is an XML or CSV file listing metadata for each transcript, such as the student’s name, sending institution, transcript file name, delivery timestamp, etc.

    The index file is critical for integrating with other systems: it allows the receiving system (in our case, ProcessMaker or the SIS) to identify and match transcripts to student records using identifying information (for example, application or student IDs included in the index). Parchment’s own Slate integration, for instance, uses such an index file to automatically match incoming transcripts to student applications. With these pieces (transcript files + index data), an institution has all the information needed to process transcripts without manual intervention.
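To illustrate how an integration might consume an index file, the sketch below parses a hypothetical CSV-format index and pairs each transcript file with the student identifier used for matching. The column names and layout here are assumptions; the actual index format is defined by your Premium Receive configuration.

```python
import csv
import io

# Hypothetical index layout; real Parchment index formats and column
# names are defined by your Premium Receive configuration.
index_csv = """file_name,student_name,student_id,sending_institution,delivered_at
Transcript_123456.pdf,Jane Doe,98765,ABC Community College,2025-05-01T14:32:00Z
Transcript_123457.pdf,John Smith,98766,XYZ State University,2025-05-01T14:35:00Z
"""

# Pair each delivered file with the student identifier used for matching
matches = []
for row in csv.DictReader(io.StringIO(index_csv)):
    matches.append((row["file_name"], row["student_id"]))
```

In a production integration, the `student_id` (or application number) from each row would be used to look up the corresponding record in the SIS or CRM before the transcript enters the workflow.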

In summary, Parchment provides a secure, end-to-end transcript delivery pipeline in which:

  • Students order transcripts

  • Parchment delivers them electronically to the institution in the preferred format

  • (With Premium services) the files and metadata can be automatically handed off to campus systems


Authenticating with the Parchment API

To integrate programmatically with Parchment and retrieve transcripts automatically, IT administrators must use Parchment’s API (or auto-delivery mechanism). Authentication is required to ensure only authorized systems can access sensitive student records.

While Parchment’s detailed API documentation is not publicly available, we can draw on industry standards and Parchment’s published integration guides to outline how the authentication process typically works:

  • API Credentials: Upon subscribing to Parchment’s Receive Premium service (or upon arrangement with Parchment’s support), your institution will receive API credentials. These might be in the form of a username and password for basic HTTP authentication, an API key/token, or an OAuth client ID/secret for token-based authentication. Ensure these credentials are stored securely (in encrypted configuration files or secure credential vaults) and not hard-coded in scripts.

  • Authentication Method: If using a RESTful API, authentication could involve obtaining a JSON Web Token (JWT) or OAuth 2.0 token. For example, you might first call an authentication endpoint (e.g. POST /api/v1/authenticate) with your credentials to receive a token. This token would then be used in the Authorization header (e.g. Authorization: Bearer <token>). In other cases, Parchment could allow HTTP basic auth on each request using your provided credentials over HTTPS.

  • Connection Security: All API calls to Parchment are made over TLS-encrypted HTTPS endpoints or via Secure FTP (SFTP). Before connecting, your network may need to allow outbound calls to Parchment’s API servers or SFTP host. If using IP whitelisting, get the appropriate endpoint addresses from Parchment.

  • Testing and Endpoints: Parchment likely provides a test environment or sandbox. The base URL for Parchment’s API could be something like https://api.parchment.com/. Your Parchment integration specialist will supply the exact endpoints.

  • InCommon SSO (Optional): Parchment supports single sign-on for users, but for API access to transcript data, SSO is typically not used. A service account with direct API auth is preferred.


Example – Authenticating via Basic Auth

import requests
from requests.auth import HTTPBasicAuth

api_url = "https://api.parchment.com/receive/v1/transcripts"
username = "<Parchment API username>"
password = "<Parchment API password>"

response = requests.get(api_url, auth=HTTPBasicAuth(username, password))

if response.status_code == 200:
    data = response.json()
    print("Connected to Parchment API. Available transcripts count:", len(data.get("transcripts", [])))
else:
    print("Authentication failed, status code:", response.status_code, response.text)

In the above example, the script uses an HTTP GET request with basic auth. In practice, the endpoint and returned data structure ("transcripts") would be defined by Parchment’s API specification. If token-based auth were required, the first step would be a requests.post() to a login endpoint to retrieve a token, then subsequent calls pass headers={"Authorization": "Bearer <token>"}.
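If token-based authentication is required, the flow could be sketched as follows. The endpoint URL, request payload, and `token` response field are all assumptions to be confirmed against Parchment’s actual API specification.

```python
import requests

# Hypothetical endpoint and response field; the real URL, payload shape,
# and token field come from Parchment's API documentation.
AUTH_URL = "https://api.parchment.com/receive/v1/authenticate"

def build_auth_header(token: str) -> dict:
    """Format a bearer token as an Authorization header."""
    return {"Authorization": f"Bearer {token}"}

def get_bearer_header(username: str, password: str) -> dict:
    """Exchange credentials for a token, then build the Authorization header."""
    resp = requests.post(
        AUTH_URL,
        json={"username": username, "password": password},
        timeout=30,
    )
    resp.raise_for_status()
    return build_auth_header(resp.json()["token"])

# Subsequent API calls would then pass the header, e.g.:
# requests.get("https://api.parchment.com/receive/v1/transcripts",
#              headers=get_bearer_header(username, password))
```

Tokens are typically short-lived, so the integration should cache the token and refresh it on expiry rather than re-authenticating on every call.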


Querying the Parchment API for Transcripts

Once authenticated, the integration can query Parchment for new transcripts and download them. There are two primary approaches:

1. Polling for New Transcripts

Your integration can regularly (e.g., every 5 minutes) call an endpoint to get a list of transcripts that have been delivered to your Parchment inbox and are ready for download. This could be an endpoint like GET /receive/v1/transcripts?status=ready or a general feed of incoming documents.

The response would likely include metadata for each available transcript – such as:

  • Transcript ID

  • Student name/ID

  • Sending institution

  • Document type

  • A URL or identifier to download the actual file

Using the metadata (or the separate index file if Parchment provides it via the API), your script can match transcripts to student records or at least log which ones have been received.

2. Webhooks or Push (if supported)

Rather than polling, Parchment might offer to push notifications when transcripts arrive – for example, calling a webhook on your system or dropping files to an SFTP server.

In the Parchment + Slate integration, Parchment automatically zips the transcript files along with an index file and sends them to a Slate-managed SFTP server. If a webhook were available, Parchment’s system would call your endpoint (with proper authentication) and provide the new transcript info. Given publicly available info, the most common method is polling or SFTP delivery, so this guide will focus on polling/download.
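For completeness, a webhook receiver on the institution side could look like the following minimal sketch using Python’s standard library. The JSON payload fields (such as `transcriptId`) and any signature or shared-secret header are assumptions; the real contract would come from Parchment.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Sketch of a webhook receiver, assuming Parchment could POST a JSON
# notification when a transcript arrives. Payload fields and any
# signature header are assumptions.
class TranscriptWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # In a real integration: verify a shared secret, then queue
        # payload.get("transcriptId") for download via the Parchment API.
        self.send_response(200)
        self.end_headers()

# To run (behind a TLS-terminating reverse proxy in production):
# HTTPServer(("0.0.0.0", 8443), TranscriptWebhook).serve_forever()
```

In production, the endpoint must be served over HTTPS and should authenticate the caller before acting on the notification.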


Retrieving Transcript Files

After identifying which transcripts are available, the next step is to download each transcript file (e.g., the PDF) and its associated index data.

This can be done in one of two ways:

  • Single-step (direct content in list): The list API might directly include the transcript content (e.g. base64-encoded PDF data or a direct download URL for each item). For instance, the JSON for each transcript might have a field like downloadUrl which your script can call to get the PDF file bytes.

  • Two-step (list then fetch): Alternatively, the API may require a separate request for each transcript file. For example, the list response gives transcript IDs, then you call GET /receive/v1/transcripts/{id} or GET /receive/v1/transcripts/{id}/file to retrieve the actual document.

In either case, transcripts are typically delivered as PDF files (which may be digitally signed to certify authenticity). If Parchment provides data-format transcripts (XML/EDI), those might be downloadable as .xml or .edi files, or even embedded within a PDF as an attachment. The integration should handle both possibilities:

  • PDF Transcripts: Save the PDF to a secure location or directly pass it to ProcessMaker for analysis. Keep the original filename if provided (often something like StudentName_Transcript.pdf or an order number) for traceability.

  • Data Transcripts (XML/EDI): These can be parsed directly. For example, a PESC XML transcript can be read by an XML parser to extract student, course, and grade information in a structured way, potentially bypassing the need for OCR. (Note: ProcessMaker TCE can likely ingest these directly into the workflow as structured input, or a custom script can map them to the needed format for the SIS).
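A simple way to handle both possibilities is to route each delivered file by type before processing. This is a sketch, assuming file extensions distinguish the formats; actual Parchment file naming may vary.

```python
from pathlib import Path

def route_transcript(path: str) -> str:
    """Pick a processing path based on the delivered file's type.
    (Extensions are the common cases; actual file naming may vary.)"""
    suffix = Path(path).suffix.lower()
    if suffix == ".pdf":
        return "document"   # send to ProcessMaker IDP for OCR/extraction
    if suffix in (".xml", ".edi"):
        return "data"       # parse directly as structured data
    return "review"         # unexpected type: flag for manual handling
```

A dispatcher like this keeps the downstream workflow identical regardless of what format the sending institution chose.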


Example – Polling and Downloading Transcripts (Python)

import requests
from requests.auth import HTTPBasicAuth

base_url = "https://api.parchment.com/receive/v1"
auth = HTTPBasicAuth(username, password)

# 1. List new transcripts
resp = requests.get(f"{base_url}/transcripts?status=ready", auth=auth)
transcripts = resp.json().get("transcripts", [])

for tx in transcripts:
    tx_id = tx["id"]
    student = tx.get("studentName") or tx.get("studentId")

    # 2. Download the transcript PDF
    pdf_resp = requests.get(f"{base_url}/transcripts/{tx_id}/file", auth=auth)
    if pdf_resp.status_code == 200:
        filename = f"{student}_{tx_id}.pdf"
        with open(filename, "wb") as f:
            f.write(pdf_resp.content)
        print(f"Downloaded {filename}")

    # 3. Acknowledge receipt (if required)
    requests.post(f"{base_url}/transcripts/{tx_id}/acknowledge", auth=auth)

In this hypothetical script, we:

  1. GET a list of ready transcripts

  2. Iterate through each

  3. GET the actual PDF content, saving it to a file

  4. Optionally notify Parchment that we have successfully retrieved it (some systems require an ACK so they don’t send it again).

The specifics (endpoints, JSON fields, ack mechanism) would come from Parchment’s API documentation or support. The key point is that the integration can programmatically fetch all new transcript files and their metadata on a schedule, preparing them for the next stage: processing with ProcessMaker.

Including the Index Data: If Parchment provides a separate index file (which might cover multiple transcripts in one batch), the API could offer an endpoint like GET /receive/v1/index/{batchId} to download the index (perhaps as an XML). This index would contain student identifiers (e.g. application number or email) that can be used to match the transcript to a student in the SIS or admissions CRM. In our example, we assumed the list API already gave us enough info (studentName or ID). In a real-world scenario, be sure to utilize the index data for accurate matching. ProcessMaker can parse an index file if needed or simply receive the necessary identifiers via API calls.

With transcripts now retrieved and stored (or held in memory), the integration shifts to feeding these transcripts into ProcessMaker TCE for automated processing.


Integrating Parchment API with ProcessMaker TCE for Automated Processing

ProcessMaker TCE (Transfer Credit Evaluation) is a specialized workflow solution designed to ingest student transcripts and automate the transfer credit evaluation process. Once transcripts are collected via Parchment, ProcessMaker TCE takes over to extract the data, evaluate it, and prepare it for import into the SIS. The integration between Parchment and ProcessMaker can be architected in a few ways.

The following describes a typical integration pipeline.

1. Triggering the Workflow

When a new transcript file is downloaded (via the script or process described above), it should initiate a ProcessMaker workflow instance for that student/transcript. This trigger could be implemented by using ProcessMaker’s REST API to start a new case (workflow) and upload the transcript file into it, or by placing the file into a designated “hot folder” that ProcessMaker monitors.

For example, ProcessMaker’s API might have an endpoint to start a process by key and attach a file as input. The transcript’s metadata (student name, ID, etc.) can be passed as variables to the process at the start. Each transcript (or each student’s batch of transcripts) will result in a new workflow case in TCE.

2. Ingestion into TCE’s IDP Engine

Once the workflow starts with the transcript attached, ProcessMaker’s Intelligent Document Processing (IDP) engine kicks in.

ProcessMaker TCE uses advanced document processing to scan and analyze transcript files, automatically extracting key data points. In essence, the platform “reads” the transcript like a human would, but in a fraction of the time. According to ProcessMaker, the platform “scans, digitizes, analyzes, and extracts information from an uploaded transcript.”

This goes beyond basic OCR; it uses a combination of OCR and intelligent algorithms (machine learning models trained on transcript layouts) to identify elements such as the student’s name, sending institution, courses taken, grades, credits, and dates. By combining text recognition with domain-specific logic, the system can accurately convert an unstructured transcript PDF into structured data ready for evaluation.

3. Automated Data Extraction

The transcript data extraction step is typically configured as a task in the workflow. ProcessMaker TCE includes pre-built extraction templates or machine learning models for common transcript formats.

  • If the transcripts are in PDF, the system will parse the text.

  • If the PDFs are scanned images, an OCR step will convert them to text.

  • If transcripts came as XML from Parchment, this step would recognize the structured file and map the fields accordingly (with much less effort, since the data is already structured).

After extraction, the system will have a data structure (in memory or as a document) that includes all courses and grades from the transcript, the institution name, and the student’s details. This data is now used for the transfer credit evaluation.

4. Automated Course Matching and Evaluation

A core value of ProcessMaker TCE is that it can automatically compare the extracted courses against the receiving institution’s course catalog and articulation rules.

Typically, before running the workflows, the university will have loaded its equivalency database into TCE – for example, a list of known courses from various institutions and their equivalent course at the home institution (or a table of how credits transfer).

Using this, the workflow can automatically match incoming courses to existing equivalencies. If a course from the transcript exactly matches an entry in the equivalency database (same course code or recognized via some heuristic), TCE will assign the pre-determined equivalent course and credit value.

According to documentation, after a transcript is processed, the system will automatically match courses from the incoming transcript with the university’s course catalog and establish equivalencies.

This means that for many common transfer courses, no human input is needed – the system identifies, for example, that ENG 101 from the sending college is equivalent to ENGL 1001 at the receiving university and flags it as such.

"ENG 101" @ ABC Community College → ENGL 1001 @ Your University

5. Workflow for Exceptions (Unmatched Courses)

Not all courses will automatically match. Any course that the system cannot find in the equivalency mappings or that needs review is flagged for human intervention.

ProcessMaker TCE can route these unmatched courses to designated staff (transfer credit evaluators or department heads) for review.

For instance, if a transcript contains a course HIST 300: Special Topics that isn’t in the database, the workflow can create a task for an admissions officer to decide how to treat it – maybe by assigning a generic history elective credit or requesting more info.

The TCE workflow may include steps like:

  • “Route Unmatched Courses”

  • “Match Courses”

as seen in the product’s documentation.

During these steps, an admissions staff member can see a list of the extracted courses that need attention and manually choose the appropriate equivalency or mark it as non-transferable.

The system interface might provide:

  • Dropdowns of possible equivalents

  • An option to consult faculty for further evaluation

This blends automation with human decision-making for a complete solution.

6. Review and Confirmation

Before finalizing, the workflow can present a summary of all extracted courses and the proposed transfer credits. Staff can review the transcript data alongside the matches.

ProcessMaker TCE’s interface allows admissions staff to verify that the OCR/extraction was accurate (correct course codes, grades, credits) and make any inline corrections if necessary. They confirm the sending institution’s details (ensuring the transcript is indeed from an accredited source, etc.) and then approve the evaluation.

7. Data Output and SIS Import

After approval, the final step is to export the evaluated transfer credit data into the SIS. There are multiple ways this can happen:

  • Automated SIS API update: If the SIS (such as Banner, PeopleSoft, Ellucian Colleague, etc.) provides web service endpoints for inserting transfer credit records, the ProcessMaker workflow can invoke those APIs. For example, many modern SIS have REST APIs or SOAP web services. The workflow can call an API like POST /api/students/{id}/transferCredits with a JSON payload of courses and credits. This can be done using a script task or connector within ProcessMaker that issues an HTTPS request (with appropriate authentication to the SIS). The SIS would then record the transferred courses on the student’s academic history.

  • File Export: Alternatively, ProcessMaker could generate a structured file (CSV, XML) of the transfer credit information, which the SIS can import using its batch import tools. Some institutions prefer a nightly import of evaluated credits. In this case, the workflow might write an entry to a transfer credit import table in a staging database or drop a file that the SIS regularly ingests.

  • Direct Database Update: In some cases, if direct DB access is allowed (usually not recommended due to maintainability), the ProcessMaker process could execute a database procedure to insert credits. However, leveraging official SIS integration points is safer and more supported.
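For the file export path, a staging file for a nightly SIS batch import might be generated as in the sketch below. The column set here is hypothetical; the real layout is dictated by your SIS’s batch import specification.

```python
import csv

# Hypothetical staging-file layout for a nightly SIS import; the real
# column set is dictated by the SIS's batch import specification.
def write_transfer_credit_export(rows, out_path="transfer_credits.csv"):
    fieldnames = ["student_id", "sending_institution", "source_course",
                  "equivalent_course", "credits_awarded", "grade"]
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)

write_transfer_credit_export([{
    "student_id": "98765",
    "sending_institution": "ABC Community College",
    "source_course": "ENG 101",
    "equivalent_course": "ENGL 1001",
    "credits_awarded": "3",
    "grade": "A",
}])
```

The workflow’s final script task would call a function like this and then drop the file on the location the SIS polls for imports.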

8. Completion and Notification

Finally, the workflow can mark itself complete. It may notify relevant parties – for instance, send an email to the admissions office that “Transcript for John Doe has been processed and imported to SIS,” or even notify the student (perhaps via an applicant portal or email) that their transfer credits have been evaluated. All transcript files and data can be archived within ProcessMaker or a document management system for audit purposes.


Throughout this integrated process, the goal is a touch-free or low-touch workflow. From the moment a transcript is delivered by Parchment to the moment it’s recorded in the SIS, minimal human intervention is needed, except in cases of exceptions or verification. This greatly accelerates admissions decisions and transfer credit evaluations, as noted by institutions that have adopted similar automation.

In the next section, we’ll detail a sample configuration of such a workflow in ProcessMaker TCE, breaking down the steps and components.


Sample Transcript Processing Workflow in ProcessMaker TCE

As an example of how the integration works, consider the following sample workflow configuration for automated transcript processing. This workflow can be designed using ProcessMaker’s workflow builder (which supports BPMN 2.0 notation for defining tasks, gateways, etc.). Each step corresponds to the stages outlined earlier:

1. Start Event – “New Transcript Received”

The process is triggered when a transcript is received from Parchment. In configuration, this could be a message event that is initiated via API call. For example, the custom script that downloads transcripts could use ProcessMaker’s REST API to start the process and attach the transcript file (and any metadata like student ID). In the process diagram, this is the start point of the case.

2. Script Task – “Import Transcript via API”

This automated task handles pulling in the transcript file if it wasn’t already attached at start. If we choose to let ProcessMaker itself fetch the transcript (instead of an external script), we could create a script task here that calls the Parchment API. Using ProcessMaker’s connector or scripting capabilities (JavaScript or PHP within a process context), the task would perform the HTTP GET to Parchment (similar to the earlier code example) and retrieve the PDF. The result is then stored in a process data object or file variable. (In many cases, however, the file will already be present because the trigger was an external event that passed it in. So this step is optional or illustrative – some implementations might skip external scripting and rely on a scheduled ProcessMaker task to fetch files.)

3. Service Task – “Extract Transcript Data (IDP)”

This is the crucial IDP step where the transcript file is processed. ProcessMaker TCE likely provides a pre-built document extraction service task that can be added to the workflow. You configure this task by specifying the input document (the transcript PDF) and the output data fields you want.

Under the hood, this task invokes the intelligent document processing engine which uses OCR and parsing rules to pull out transcript details. No manual step is needed here – it’s fully automated.

On completion, the task outputs structured data such as a list of courses (each with fields like course code, title, grade, credits, term, etc.), the student’s info, and the sending institution name. These outputs populate process variables or a data collection in ProcessMaker.
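The extracted output handed to the next workflow step might resemble the structure below. The field names and shape are illustrative; the actual schema is defined by the TCE extraction configuration.

```python
# Illustrative shape of the IDP extraction output; field names are
# assumptions, not the actual TCE schema.
extracted = {
    "student": {"name": "Jane Doe", "id": "98765"},
    "sending_institution": "ABC Community College",
    "courses": [
        {"code": "ENG 101", "title": "English Composition I",
         "grade": "A", "credits": 3.0, "term": "Fall 2023"},
        {"code": "MATH 110", "title": "College Algebra",
         "grade": "B+", "credits": 3.0, "term": "Fall 2023"},
    ],
}

total_credits = sum(c["credits"] for c in extracted["courses"])
```

Downstream tasks (review, matching, SIS export) all operate on this one structure, which is why validating it early pays off.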

4. User Task – “Review Extracted Data”

Even the best OCR/IDP may occasionally misread data (for instance, a smudge on a scan could confuse a character). Thus, a common practice is to include a human validation step.

In this user task, an admissions staff member is presented with the results of the extraction in a readable format (often via a form or interface within ProcessMaker). They can see each course that was parsed and the associated grade and credits. They can cross-reference with the transcript image if needed (ProcessMaker can show the original PDF side-by-side).

The staffer confirms that the data was extracted correctly. If there are errors (maybe a course name didn’t parse correctly or a grade looks wrong), the staff can correct those fields in the form. This step ensures data quality before automation continues.

In cases where the extraction was highly confident and the institution trusts the automation, this step could be bypassed or only sampled; however, early in adoption it’s wise to include a review to build confidence in the system’s accuracy.

5. Script Task – “Automated Course Matching”

Here the process takes the extracted courses and attempts to match them with the institution’s course equivalency database. This could be implemented as a script or as a rules task.

ProcessMaker might allow integration with a database or an internal lookup table loaded from CSV (as indicated by TCE’s data configuration step where admins upload lists of course equivalencies).

The script goes through each extracted course and checks if the combination of “sending institution + course code (or title)” exists in the equivalency table.

For those that match, it records the equivalent course at the home institution and the credit value. For those that don’t match, it flags them.

The outcome is two sets of data:

  • matched courses with their decided equivalencies

  • unmatched courses requiring review
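The matching logic described above could be sketched as follows, assuming the equivalency table is keyed by sending institution plus course code. In TCE the table would come from the uploaded equivalency database; the data shapes here are illustrative.

```python
# Hypothetical equivalency table keyed by (sending institution, course code);
# in TCE this would come from the uploaded equivalency database.
equivalencies = {
    ("ABC Community College", "ENG 101"): ("ENGL 1001", 3.0),
    ("ABC Community College", "MATH 110"): ("MATH 1010", 3.0),
}

def match_courses(institution, courses):
    """Split extracted courses into matched and unmatched sets."""
    matched, unmatched = [], []
    for course in courses:
        key = (institution, course["code"])
        if key in equivalencies:
            equiv_code, credits = equivalencies[key]
            matched.append({**course,
                            "equivalent": equiv_code,
                            "credits_awarded": credits})
        else:
            unmatched.append(course)  # route to a human evaluator
    return matched, unmatched

matched, unmatched = match_courses("ABC Community College", [
    {"code": "ENG 101", "grade": "A"},
    {"code": "HIST 300", "grade": "B"},  # not in the table -> needs review
])
```

Real implementations usually add fuzzy matching on course titles as a fallback before flagging a course as unmatched.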

6. Exclusive Gateway – “All Courses Matched?”

A decision point splits the flow:

  • If there are any unmatched courses, the process will take the path to handle them manually

  • If everything matches up perfectly, it can skip ahead

7. User Task – “Resolve Unmatched Courses” (Conditional)

On the unmatched branch, this task assigns the list of unmatched courses to a transfer credit evaluator (or possibly routes to different users based on subject area). The user is presented with each course that needs attention.

They may have a UI to pick an equivalency (e.g., search for a similar course in the catalog) or mark a decision (e.g., “Counts as elective credit”, “No credit”).

They might also gather input from faculty – ProcessMaker could facilitate sending the course details to a faculty reviewer via a sub-process or sending a notification (the documentation suggests an option to request departmental review or even notify the student if a course cannot be transferred).

For our purposes, this step ends with the evaluator entering the appropriate outcome for each unmatched course.

Once done, all courses have a resolution (either matched automatically or decided manually).

8. User Task – “Approve Transfer Credit Evaluation”

This is a final verification/approval step by a supervisor or registrar. It is good practice for a responsible official to sign off before data enters the official SIS record. This task shows the complete evaluated transfer credit summary:

  • student info

  • sending school

  • list of original courses with their grades

  • the decided equivalencies (including any credit awarded)

The registrar can approve or, if something looks off, send it back for re-review (ProcessMaker supports looping or sending the case back to previous steps if needed). Assuming approval, the case moves forward.

9. Service/Script Task – “Import into SIS”

In this automated step, the system now pushes the finalized data to the SIS. There are a few ways to configure this:

  • Connector Integration: If ProcessMaker has a connector for your SIS (some BPM platforms have out-of-the-box connectors for common systems or an ESB integration), you could configure that here.

  • Custom Script Integration: More generally, a script task can execute custom code to perform the import. For example, a script in this task could format the transfer credit data as required and call a REST API on the SIS. Pseudocode for a PeopleSoft integration might look like:

let sisApiUrl = "https://university.edu/ps/sis/api/transferCredits";
let payload = {
  studentId: process_vars.studentId,
  credits: process_vars.equivalencies // list of courses and credits to import
};
let sisResponse = http.post(sisApiUrl, payload, { auth: "Bearer " + SIS_TOKEN });
if (sisResponse.status != 200) {
  throw new Error("SIS import failed: " + sisResponse.message);
}

Of course, the actual implementation will depend on the SIS. Some SIS require calling stored procedures or queuing the data. Another approach is to have the process drop the data into an intermediate table and then call an SIS job (maybe via an API or message queue) to pick it up. Regardless of method, this task is where the integration writes the evaluated credit data into the student’s record.

10. End Event – “Process Complete”

The workflow ends. At this point:

  • the transcript has been stored (possibly archived),

  • the extracted data and evaluation decisions are saved,

  • and the SIS is updated.

The case remains in ProcessMaker for audit/history. Notifications can be sent (e.g., email to the student: "Your transcript from X University has been evaluated and Y credits were accepted.").


Workflow Summary: The above configuration ensures that from the moment a transcript is retrieved via the Parchment API, it is processed through a standardized pipeline:

Automated retrieval -> data extraction -> auto-matching -> manual exception
handling -> SIS update.

This design minimizes manual data entry (no one is keying in course codes and grades – the system captured that automatically). It also provides oversight where necessary (staff review steps). Each step is logged, creating an audit trail of who did what and when – which is useful for compliance and troubleshooting. Institutions can adjust the workflow (for example, adding a plagiarism check, or integrating a notification to students) as needed thanks to the flexibility of the BPM platform.

With the workflow in place, we now provide some concrete examples of the integration pieces – specifically, example API requests and script snippets that might be used in this solution.


Example API Requests and Workflow Automation Scripts

To solidify the concepts, this section presents a few example requests and code snippets relevant to the Parchment–ProcessMaker integration. These examples are illustrative (using hypothetical endpoints and data) but demonstrate the tasks an IT team would implement.

1. Retrieving a Transcript via Parchment API (HTTP Request)

Below is an example curl command that an integration service might use to fetch a transcript file from Parchment:

# Example: Get a specific transcript PDF by ID from Parchment API
curl -u "<API_USER>:<API_PASS>" -H "Accept: application/pdf" \
"https://api.parchment.com/receive/v1/transcripts/123456/file" \
-o "Transcript_123456.pdf"

In this command:

  • We use the -u flag to provide HTTP Basic auth credentials (<API_USER> and <API_PASS> would be replaced with the actual API account).

  • We set Accept: application/pdf to indicate we expect a PDF in response.

  • The URL /transcripts/123456/file is a placeholder for the endpoint to download the transcript with ID 123456.

  • The -o option saves the output to a file named Transcript_123456.pdf.

After running this, the PDF of the transcript would be saved locally. A similar approach could be used within a Python/Java/PHP script using their respective HTTP libraries.
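As a sketch of what that Python equivalent might look like, the helper below performs the same download. The endpoint path mirrors the hypothetical curl example above; the `session` parameter is any `requests.Session`-compatible object, which also makes the function easy to test against a stub:

```python
def download_transcript(session, transcript_id, api_user, api_pass, dest_path):
    """Fetch one transcript PDF over HTTP Basic auth and save it to dest_path.

    The endpoint path is hypothetical, mirroring the curl example above;
    `session` is any requests.Session-compatible object.
    """
    url = f"https://api.parchment.com/receive/v1/transcripts/{transcript_id}/file"
    resp = session.get(
        url,
        auth=(api_user, api_pass),
        headers={"Accept": "application/pdf"},
        timeout=30,
    )
    resp.raise_for_status()  # fail loudly on 4xx/5xx instead of saving an error page
    with open(dest_path, "wb") as f:
        f.write(resp.content)
    return dest_path

# Usage against the real API (credentials required):
#   import requests
#   download_transcript(requests.Session(), 123456, "<API_USER>", "<API_PASS>",
#                       "Transcript_123456.pdf")
```

Injecting the session rather than hard-coding the HTTP library keeps credentials and transport concerns at the call site, which simplifies both unit testing and later changes (e.g., adding certificate pinning).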

2. Starting a ProcessMaker Case via API (HTTP Request)

If using an external scheduler or custom service to trigger ProcessMaker, one can call the ProcessMaker REST API to start the workflow when a new transcript is downloaded. For example:

# Example: Start a ProcessMaker case for Transfer Credit Evaluation
curl -X POST "https://processmaker.yourschool.edu/api/1.0/workflow/run-case" \
  -H "Authorization: Bearer <PM_API_TOKEN>" \
  -F "process_id=<TCE_PROCESS_ID>" \
  -F "studentId=98765" \
  -F "transcriptFile=@Transcript_123456.pdf"

Here:

  • We POST to a (hypothetical) endpoint /workflow/run-case with an API token for authorization.

  • We send multipart form data including:

    • process_id – identifying the Transfer Credit Evaluation process definition on the ProcessMaker server,

    • a form field studentId with the student’s ID (98765 in this example),

    • and attach the file Transcript_123456.pdf (the @ syntax in curl attaches a file).

  • This would initiate a new case of the workflow with the file and student ID as input data.

  • The workflow then proceeds as configured (performing the extraction, etc.).
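The same case-start call can be scripted in Python. The sketch below assumes the same hypothetical run-case endpoint and form fields as the curl example; the actual endpoint, parameters, and response body vary by ProcessMaker version:

```python
def start_tce_case(session, base_url, token, process_id, student_id, pdf_path):
    """Start a new Transfer Credit Evaluation case via the hypothetical
    run-case endpoint above, attaching the transcript PDF as multipart data.

    `session` is any requests.Session-compatible object.
    """
    with open(pdf_path, "rb") as f:
        resp = session.post(
            f"{base_url}/api/1.0/workflow/run-case",
            headers={"Authorization": f"Bearer {token}"},
            data={"process_id": process_id, "studentId": str(student_id)},
            files={"transcriptFile": f},
            timeout=60,
        )
    resp.raise_for_status()
    return resp  # response body (e.g. the new case ID) depends on the API version

# Usage against a real server (token and process ID required):
#   import requests
#   start_tce_case(requests.Session(), "https://processmaker.yourschool.edu",
#                  "<PM_API_TOKEN>", "<TCE_PROCESS_ID>", 98765,
#                  "Transcript_123456.pdf")
```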

3. Script Snippet – Parsing PESC XML Transcript

In cases where Parchment provides an XML transcript (PESC standard), a script can be used to parse it. For example, using Python’s built-in XML library:

from xml.etree import ElementTree as ET

tree = ET.parse("transcript.xml")
root = tree.getroot()

# Assuming PESC XML structure, find student name and courses
student_name = root.findtext('.//PersonName/FormattedName')
courses = []

for course in root.findall('.//Course'):
    course_title = course.findtext('CourseTitle')
    grade = course.findtext('CourseGrade')
    credits = course.findtext('CourseCreditEarned')
    courses.append({
        "title": course_title,
        "grade": grade,
        "credits": credits
    })

print(f"Parsed transcript for {student_name}: {len(courses)} courses found.")

This snippet reads an XML file, finds the student’s name and iterates through each <Course> element to extract title, grade, and credits. This is roughly how ProcessMaker’s extraction might work if given an XML – though in TCE, much of this is likely handled by built-in logic once the XML is mapped.
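One practical caveat: PESC transcript documents typically declare an XML namespace, in which case unqualified searches like `.//Course` return nothing. The sketch below shows one namespace-agnostic way to search; the namespace URI and element structure here are placeholders for illustration, not the official PESC schema:

```python
from xml.etree import ElementTree as ET

def findall_local(node, local_name):
    """Find descendant elements by local tag name, ignoring any namespace."""
    return [el for el in node.iter() if el.tag.split("}")[-1] == local_name]

# Minimal namespaced sample for illustration only; real PESC documents are
# far richer and use the official PESC namespace.
sample = """<ns:CollegeTranscript xmlns:ns="urn:example:transcript">
  <ns:Course><ns:CourseTitle>Composition I</ns:CourseTitle></ns:Course>
  <ns:Course><ns:CourseTitle>Calculus I</ns:CourseTitle></ns:Course>
</ns:CollegeTranscript>"""

root = ET.fromstring(sample)
titles = [findall_local(c, "CourseTitle")[0].text
          for c in findall_local(root, "Course")]
print(titles)  # ['Composition I', 'Calculus I']
```

Alternatively, `findall` accepts a namespace map, but matching on the local tag name keeps the parser resilient if the sending system's namespace prefix or schema version changes.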

4. Script Snippet – Sending Evaluated Credits to SIS (e.g., via REST API)

After ProcessMaker has determined which courses transfer and to what equivalents, the final script might send this data to the SIS. Here’s a pseudocode example using Python’s requests to send data to a SIS API:

import requests

sis_api_url = "https://sis.university.edu/api/transfer-credits"
sis_token = "<SIS_API_TOKEN>"  # obtained from SIS authentication

# Example data prepared from ProcessMaker output
transfer_data = {
    "studentId": 98765,
    "transferCredits": [
        {
            "externalCourse": "ENG 101",
            "externalInstitution": "ABC Community College",
            "grade": "A",
            "creditsEarned": 3,
            "equivalentCourse": "ENGL 1101",
            "equivalentCredits": 3
        },
        {
            "externalCourse": "MATH 210",
            "externalInstitution": "ABC Community College",
            "grade": "B+",
            "creditsEarned": 4,
            "equivalentCourse": "MATH 2200",
            "equivalentCredits": 4
        }
    ]
}

resp = requests.post(
    sis_api_url,
    json=transfer_data,
    headers={"Authorization": f"Bearer {sis_token}"}
)

if resp.status_code == 200:
    print("SIS updated successfully for student", transfer_data["studentId"])
else:
    print("SIS update failed:", resp.status_code, resp.text)

In this example, we prepare a JSON payload with the student’s ID and a list of transfer credits (each including the original course and the decided equivalent course at the university). We then POST this to the SIS endpoint with an authorization token. A real SIS might have a different schema or require separate calls per course; adjust accordingly.

The key is that this step is automated – no registrar is manually typing these courses into the SIS; the integration handles it.

5. ProcessMaker Script Task – Automated Course Matching (JavaScript-style pseudocode)

To demonstrate how a script within the ProcessMaker workflow might look (combining several steps internally), consider this pseudo-code that could be part of the “Automated Course Matching” task within ProcessMaker TCE:

// Pseudocode for a ProcessMaker script task
// Assume `extractedCourses` is an array of courses from OCR, and
// `equivalencyTable` is loaded from a CSV or DB.
let matched = [];
let unmatched = [];

for (let course of extractedCourses) {
  let key = course.institution + '|' + course.code;
  
  if (equivalencyTable[key]) {
    matched.push({
      originalCourse: course.code,
      originalTitle: course.title,
      originalCredits: course.credits,
      grade: course.grade,
      equivalentCourse: equivalencyTable[key].ourCourse,
      equivalentCredits: equivalencyTable[key].ourCredits
    });
  } else {
    unmatched.push({
      originalCourse: course.code,
      originalTitle: course.title,
      originalCredits: course.credits,
      grade: course.grade
    });
  }
}

// Store results as process variables for next steps
process_vars.matchedCourses = matched;
process_vars.unmatchedCourses = unmatched;

In this example:

  • We loop through each extracted course (already parsed from the transcript).

  • We create a lookup key using the sending institution and course code.

  • If there’s a match in our equivalencyTable, we record the equivalency.

  • Otherwise, we store the course as unmatched to be handled manually.

  • Finally, we store both lists in process variables so the next task in the workflow (e.g., rendering a human task or sending to SIS) can access them.

This kind of logic can be implemented in JavaScript inside a ProcessMaker script task or externally in an API/microservice.
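If the matching step lives in an external service instead, the same logic translates directly to Python. This sketch uses the same `institution|code` lookup-key convention as the pseudocode above; the field names are this paper's convention, not a fixed schema:

```python
def match_courses(extracted_courses, equivalency_table):
    """Split extracted courses into matched/unmatched lists using an
    'institution|code' lookup key, mirroring the pseudocode above."""
    matched, unmatched = [], []
    for course in extracted_courses:
        key = f"{course['institution']}|{course['code']}"
        equiv = equivalency_table.get(key)
        if equiv:
            # Known equivalency: carry the original fields plus our mapping
            matched.append({**course,
                            "equivalentCourse": equiv["ourCourse"],
                            "equivalentCredits": equiv["ourCredits"]})
        else:
            # No match: route to manual review downstream
            unmatched.append(dict(course))
    return matched, unmatched
```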

These examples showcase the kind of integration code an IT team would write. Depending on the tools at hand, some of these could be done with low-code configuration in ProcessMaker (like using its connectors or built-in integrations), while others might be custom scripts. Next, we will discuss the critical security and data handling considerations to keep in mind when implementing this solution.


Security and Data Handling Considerations

Automating transcript processing involves handling sensitive student academic records, so security and proper data management are paramount. Both Parchment and ProcessMaker provide features to help maintain security, but it’s important for the integrating institution to design and configure the system with best practices in mind:

● Data Privacy & Compliance

Transcripts contain personally identifiable information (PII) and educational records protected under laws like FERPA in the United States. All data flows must comply with such regulations. This means only authorized individuals/systems should access the transcripts and extracted data. ProcessMaker’s role-based access control should be used to ensure that, for example, only admissions personnel or transfer credit evaluators can view transcript content within the workflow. If transcripts include sensitive info like social security numbers (some do), consider masking those in the interface unless absolutely needed.

● Secure Transmission

Ensure that the retrieval of transcripts from Parchment uses secure channels. Parchment’s API and SFTP are encrypted (HTTPS, SFTP over SSH) – do not use any insecure protocols. Verify Parchment’s server certificates and consider pinning if possible. Likewise, any API calls from ProcessMaker to the SIS should use HTTPS with valid certificates. If self-signed certs are used internally, the ProcessMaker environment should trust them to avoid man-in-the-middle risk.

● Encryption at Rest

Determine how transcript files and extracted data are stored. If ProcessMaker is on-premises, transcripts saved on disk or in a database should be encrypted at rest (either via disk encryption or within the application). Parchment PDF transcripts are often digitally signed and sometimes encrypted to prevent tampering (Parchment describes features like “Blue Ribbon Security” for certified PDF transcripts). While ProcessMaker will need to open them for OCR, the files should remain securely stored. If using cloud storage (e.g., if ProcessMaker is cloud-hosted), ensure the cloud provider’s storage encryption is enabled and that access keys are restricted.

● Temporary File Handling

If your integration script writes PDFs to a local directory before feeding to ProcessMaker, make sure to clean up those files after processing to minimize sensitive data lingering on disk. Alternatively, stream the file data directly into ProcessMaker without writing to an intermediate location. If using containerized deployments, be mindful that containers might be ephemeral – ensure the files are either in a persistent, secured volume or immediately processed and discarded.

● API Credentials Security

Both the Parchment API credentials and any SIS API tokens/credentials are highly sensitive. They grant access to student records and potentially allow changes in the SIS. These should never be exposed in code repositories or logs. Use environment variables or secure credential stores. In ProcessMaker, if you need to store them (for example, to call Parchment from a script task), use its secret management features or at least obfuscate them in the code. Rotate these credentials periodically and immediately if a breach is suspected.
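A minimal fail-fast pattern for environment-based credentials is sketched below. The variable names are conventions chosen for this paper, not anything Parchment or any SIS mandates:

```python
import os

def load_secret(name, env=None):
    """Fetch a required secret from the environment; fail fast if it's absent
    so a misconfigured deployment can't silently run without credentials."""
    env = os.environ if env is None else env
    value = env.get(name)
    if not value:
        raise RuntimeError(f"Required environment variable {name} is not set")
    return value

# e.g. api_user = load_secret("PARCHMENT_API_USER")
#      sis_token = load_secret("SIS_API_TOKEN")
```

Raising immediately at startup, rather than at first use, surfaces configuration mistakes before any transcript data is touched.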

● Audit Logging

Leverage logging to maintain an audit trail of transcript processing. Parchment’s system will log when and by whom transcripts are downloaded (and the index file tracks what was delivered). On the ProcessMaker side, enable logging for the workflow steps – e.g., log when a transcript was ingested, when data extraction happened, and when credits were transferred to SIS. This helps in debugging any issues and also in demonstrating compliance (you can show who approved a transfer credit, for instance). Just be cautious not to log actual PII or full transcript text in any system logs – logs should reference transaction IDs or student IDs rather than names or grades to avoid creating new sensitive data stores.

● Access Control and User Permissions

Within ProcessMaker TCE, configure user roles properly. The staff who review transcripts or match courses should only see what they need. For example, if your institution wants to restrict certain data (like GPAs) from some reviewers, consider if the workflow can limit that. The system admin should have total oversight, but the principle of least privilege (PoLP) should apply to everyone else. Parchment admin access (the exchange.parchment.com account) should also be limited to necessary personnel and protected with strong passwords and (if available) two-factor authentication.

● Testing with Dummy Data

When building the integration, use test transcripts (either provided by Parchment in a sandbox or created from real but anonymized data) to ensure your pipeline works end-to-end. Transcripts can vary widely in format; test edge cases like multi-page transcripts, transcripts with non-standard course names, international transcripts (if applicable), etc., to see how the system handles them. This can uncover potential parsing issues or cases where the data extraction might fail. You can then adjust the IDP configurations or add exception handling. For instance, if a transcript is too poor in quality to OCR properly, your workflow might detect that (e.g., if very few courses were extracted) and route it to a manual processing path.

● Performance and Scalability

Ensure your solution can handle peak volumes securely. Parchment might deliver hundreds of transcripts in a short window (e.g., after application deadlines). The integration script and ProcessMaker should be tuned to process these efficiently. Use batch processing where possible and scalable infrastructure (multiple workers for ProcessMaker tasks, etc.). Monitor for any failures – e.g., if Parchment API is unreachable, have a retry mechanism and alert the IT team if transcripts cannot be fetched for a certain time.
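A simple retry wrapper with exponential backoff, such as the sketch below, covers the transient-failure case; production code would typically also alert the IT team once retries are exhausted:

```python
import time

def with_retries(fn, attempts=4, base_delay=2.0):
    """Call fn(); on exception, retry with exponential backoff
    (base_delay, 2x, 4x, ...). A simple sketch, not a full retry policy."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error for alerting
            time.sleep(base_delay * (2 ** attempt))

# Usage (wrapping a hypothetical fetch function):
#   with_retries(lambda: download_transcript(session, 123456, user, pw, path))
```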

● Data Retention and Disposal

Decide how long to keep transcript files and data in ProcessMaker after processing. Some institutions may choose to retain the original transcript PDF in an archive permanently (for record-keeping), perhaps in a secure content management system, and just keep a reference in the SIS that states an official transcript was received on a certain date. Others might delete the PDF once the credits are transferred, to minimize data held. Check institutional policy and configure an automated cleanup if needed. For example, ProcessMaker could have a scheduled job to purge transcript files older than X years, or at least move them to long-term encrypted storage. Make sure such retention plans are in line with regulatory requirements (FERPA doesn’t mandate a specific retention, but your accreditation or state laws might).
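A scheduled cleanup job could be as simple as the sketch below. This is illustrative only: whether files should be deleted outright or moved to encrypted archival storage, and after how long, must follow institutional retention policy:

```python
import os
import time

def purge_old_files(directory, max_age_days):
    """Delete transcript files older than max_age_days and return their names.
    Sketch only -- archiving to encrypted long-term storage may be preferable
    to deletion, depending on retention policy."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed.append(name)
    return removed
```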

● Maintain Parchment Updates

Monitor Parchment’s communications for any API changes or updates. Since we rely on publicly known info, be aware that Parchment may update their API version or deprecate certain integration methods (for instance, moving from older SOAP-based web services to a modern REST API). Ensure your integration can be updated accordingly. Similarly, keep ProcessMaker updated to get the latest security patches and improvements in the IDP engine.

● Fail-safes and Manual Overrides

Despite automation, plan for exceptions. If the ProcessMaker system is down for maintenance, is there a fallback to manually retrieve transcripts from Parchment and process them? If the Parchment API fails (network outage or Parchment service issue), transcripts might pile up – have a procedure to catch up once service is restored. Within the ProcessMaker workflow, include timeouts or escalations: for example, if a human task has been pending too long, notify a manager. These procedures ensure that transcripts don’t get “stuck” unnoticed.

By addressing these security and data handling aspects, the institution can confidently deploy the integration knowing that student data is protected and the process is robust.

Both Parchment and ProcessMaker are designed with security in mind (Parchment, for instance, verifies the legitimacy of all sending institutions and uses secure document technology, and ProcessMaker allows granular access control and audit trails), but it’s the responsibility of the implementers to configure and use these features correctly.


Conclusion

Integrating the Parchment API with ProcessMaker TCE creates a powerful, streamlined solution for managing incoming transcripts and transfer credit evaluations.

We began with an overview of how Parchment delivers transcripts – emphasizing electronic delivery, flexible formats (PDF, XML, etc.), and automation capabilities. Building on that, we described how to authenticate and interact with Parchment’s API to programmatically retrieve transcript files and index data, enabling a hands-off collection process.

From there, we delved into ProcessMaker’s Transfer Credit Evaluation workflow, showing how it can automatically ingest transcripts, extract their data using intelligent document processing, and even perform initial course equivalency matching. A sample workflow was outlined, illustrating the end-to-end automation: retrieving files, extracting data, routing for review of any unmatched courses, and finally importing the credits into the SIS.

We provided example code snippets for key integration points – fetching transcripts, starting ProcessMaker cases, and updating the SIS – to demonstrate the technical implementation.

Finally, we addressed the crucial security considerations, from protecting sensitive data in transit and at rest to implementing proper access controls and audit logs, aligning the solution with FERPA and institutional policies.

Benefits of Integration

The benefits of this integration are significant:

  • Admissions and registrar teams can process transcripts in a fraction of the time it used to take, leading to faster admissions decisions and transfer credit awards.

  • One Parchment client noted that with data automation, their “data entry time has been cut in half” and decisions go out much quicker.

  • By eliminating manual transcript data entry, staff can focus on higher-value tasks and students receive quicker feedback.

  • Moreover, automation reduces errors that often occur with manual entry, improving data accuracy for student records.

For IT professionals, the approach outlined in this paper offers a blueprint using standard, publicly available capabilities: web APIs, secure file transfers, and configurable workflow software. It does not require reinventing the wheel, but rather integrating proven systems (Parchment’s trusted transcript exchange network and ProcessMaker’s flexible BPM platform) in a cohesive manner.

As with any integration project, initial setup and testing are key, but once in production, the system can run with minimal human intervention – and it’s scalable to handle peak loads during admissions seasons.

Institutions considering transcript automation via ProcessMaker and Parchment should:

  • Evaluate their current transcript volumes and pain points

  • Use this guide to formulate a solution design tailored to their environment

  • Engage stakeholders from registrar, admissions, and IT security early on

Both Parchment and ProcessMaker offer professional services and support channels which can be leveraged for detailed implementation guidance beyond the public documentation.


Sources

  1. Parchment Docufide Receiver Data Sheet – Parchment’s platform features for transcript receiving and integration

  2. Parchment + Slate Integration Announcement – Example of automated transcript delivery to an SIS (Slate) with index files

  3. ProcessMaker TCE Product Information – Overview of ProcessMaker’s Transfer Credit Evaluation capabilities

  4. Parchment Receive Premium + Data Automation – Benefits of eliminating manual data entry by extracting transcript data automatically

  5. ProcessMaker Higher Education Documentation – Transfer Credit Evaluation workflow steps (reviewing transcripts, routing unmatched courses)

  6. Parchment Docufide Integration Flyer – Auto-delivery options (SFTP/Web Services) and data formats (PDF, XML, EDI) supported by Parchment
