| Project Name: | Synopse mORMot Framework |
|---|---|
| Document Name: | Software Architecture Design |
| Document Revision: | 1.18 |
| Date: | February 25, 2019 |
| Project Manager: | Arnaud Bouchez |
Document License
Synopse mORMot Framework Documentation.
Copyright (C) 2008-2019 Arnaud Bouchez.
Synopse Informatique - http://synopse.info
The Synopse mORMot Framework Source Code is licensed under a GPL / LGPL / MPL tri-license, and is free to be included in any application.
The Synopse mORMot Framework Documentation is a free document, released under a GPL 3.0 License, distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Trademark Notice
Rather than indicating every occurrence of a trademarked name as such, this document uses the names only in an editorial fashion and to the benefit of the trademark owner with no intention of infringement of the trademark.
| Prepared by: | Title: | Signature: | Date: |
|---|---|---|---|
| Arnaud Bouchez | Project Manager | | |
The current revision of this document is 1.18.
| Name | Description | Rev. | Date |
|---|---|---|---|
| SWRS | Software Requirements Specifications | 1.18 | February 25, 2019 |
| SDD | Software Design Document | 1.18 | February 25, 2019 |
| DI | Design Input Product Specifications | 1.18 | February 25, 2019 |
Foreword
Purpose
Responsibilities
GNU General Public License
1. Synopse mORMot Overview
1.1. Client-Server ORM/SOA framework
1.2. Highlights
1.3. Benefits
1.4. Legacy code and existing projects
1.5. FAQ
2. Architecture principles
2.1. General design
2.2. Architecture Design Process
2.3. Model-View-Controller
2.4. Multi-tier architecture
2.5. Service-Oriented Architecture (SOA)
2.6. Object-Relational Mapping (ORM)
2.7. NoSQL and Object-Document Mapping (ODM)
2.8. Domain-Driven Design
2.8.1. Definition
2.8.2. Patterns
2.8.3. Is DDD good for you?
2.8.4. Introducing DDD
3. Enter new territory
3.1. Meet the mORMot
3.2. Main units
4. SynCommons unit
4.1. Unicode and UTF-8
4.2. Currency handling
4.3. TDynArray dynamic array wrapper
4.3.1. TList-like properties
4.3.2. Enhanced features
4.3.3. Capacity handling via an external Count
4.3.4. JSON serialization
4.3.5. Daily use
4.3.6. TDynArrayHashed
4.3.7. TSynDictionary
4.4. TDocVariant custom variant type
4.4.1. TDocVariant documents
4.4.1.1. Variant object documents
4.4.1.2. Variant array documents
4.4.1.3. Create variant object or array documents from JSON
4.4.1.4. Per-value or per-reference
4.4.2. Advanced TDocVariant process
4.4.2.1. Number values options
4.4.2.2. Object or array document creation options
4.4.2.3. Integration with other mORMot units
4.5. Cross-cutting functions
4.5.1. Iso8601 time and date
4.5.1.1. TDateTime and TDateTimeMS
4.5.1.2. TTimeLog
4.5.1.3. TUnixTime
4.5.2. Time Zones
4.5.3. Safe locks for multi-thread applications
4.5.3.1. Protect your resources
4.5.3.2. Fixing TRTLCriticalSection
4.5.3.3. Introducing TSynLocker
4.5.3.4. Inheriting from T*Locked
4.5.3.5. Injecting TAutoLocker instances
4.5.3.6. Injecting IAutoLocker instances
4.5.3.7. Safe locked storage in TSynLocker
4.5.3.8. Thread-safe TSynDictionary
5. Object-Relational Mapping
5.1. TSQLRecord fields definition
5.1.1. Property Attributes
5.1.2. Text fields
5.1.3. Date and time fields
5.1.4. TSessionUserID field
5.1.5. Enumeration fields
5.1.6. Floating point and Currency fields
5.1.7. TSQLRecord fields
5.1.8. TID fields
5.1.9. TRecordReference and TRecordReferenceToBeDeleted
5.1.10. TSQLRecord, TID, TRecordReference deletion tracking
5.1.11. Variant fields
5.1.12. Record fields
5.1.13. BLOB fields
5.1.14. TNullable* fields for NULL storage
5.2. Working with Objects
5.3. Queries
5.3.1. Return a list of objects
5.3.2. Query parameters
5.3.3. Introducing TSQLTableJSON
5.3.4. Note about query parameters
5.4. Automatic TSQLRecord memory handling
5.5. Objects relationship: cardinality
5.5.1. "One to one" or "One to many"
5.5.1.1. TSQLRecord published properties are IDs, not instance
5.5.1.2. Transtyping IDs
5.5.1.3. Automatic instantiation and JOINed query
5.5.2. "Has many" and "has many through"
5.5.2.1. Shared nothing architecture (or sharding)
5.5.2.1.1. Embedding all needed data within the record
5.5.2.1.2. Nesting objects and arrays
5.5.2.1.2.1. TDocVariant and variant fields
5.5.2.1.2.1.1. Schemaless storage via a variant
5.5.2.1.2.1.2. JSON operations from SQL code
5.5.2.1.2.2. Dynamic arrays fields
5.5.2.1.2.2.1. Dynamic arrays from Delphi Code
5.5.2.1.2.2.2. Dynamic arrays from SQL code
5.5.2.1.2.3. TPersistent/TCollection fields
5.5.2.1.2.4. Any TObject, including TObjectList
5.5.2.1.2.5. Sharding on NoSQL engines
5.5.2.2. ORM implementation via pivot table
5.5.2.2.1. Introducing TSQLRecordMany
5.5.2.2.2. Automatic JOIN query
5.6. ORM Data Model
5.6.1. Creating an ORM Model
5.6.2. Several Models
5.6.3. Filtering and Validating
5.7. ORM Cache
5.8. Calculated fields
5.8.1. Setter for TSQLRecord
5.8.2. TSQLRecord.ComputeFieldsBeforeWrite
5.9. Audit Trail for change tracking
5.9.1. Enabling audit-trail
5.9.2. A true Time Machine for your objects
5.9.3. Automatic history packing
5.10. Master/slave replication
5.10.1. Enable synchronization
5.10.2. From master to slave
5.10.3. Real-time synchronization
5.10.4. Replication use cases
6. Daily ORM
6.1. ORM is not Database
6.1.1. Objects, not tables
6.1.2. Methods, not SQL
6.1.3. Think multi-tier
6.2. One ORM to rule them all
6.2.1. Rude class definition
6.2.2. Persist TSQLRecord, not any class
6.2.3. Several ORMs at once
6.2.4. The best ORM is the one you need
7. Database layer
7.1. SQLite3-powered, not SQLite3-limited
7.1.1. SQLite3 as core
7.1.2. Extended by SQLite3 virtual tables
7.1.3. Data access benchmark
7.1.3.1. Software and hardware configuration
7.1.3.2. Insertion speed
7.1.3.3. Reading speed
7.1.3.4. Analysis and use case proposal
7.2. SQLite3 implementation
7.2.1. Statically linked or using an external dll
7.2.1.1. Static bcc-compiled .obj
7.2.1.2. Official MinGW-compiled sqlite3.dll
7.2.1.3. Visual C++ compiled sqlite3.dll
7.2.2. Prepared statement
7.2.3. R-Tree inclusion
7.2.4. FTS3/FTS4/FTS5
7.2.4.1. Dedicated FTS3/FTS4/FTS5 record type
7.2.4.2. Stemming
7.2.4.3. FTS searches
7.2.4.4. FTS4 index tables without content
7.2.5. Column collations
7.2.6. REGEXP operator
7.2.7. ACID and speed
7.2.7.1. Synchronous writing
7.2.7.2. File locking
7.2.7.3. Performance tuning
7.2.8. Database backup
7.3. Virtual Tables magic
7.3.1. Virtual Table module classes
7.3.2. Defining a Virtual Table module
7.3.3. Using a Virtual Table module
7.3.4. Virtual Table, ORM and TSQLRecord
7.3.5. In-Memory "static" process
7.3.5.1. In-Memory tables
7.3.5.2. In-Memory virtual tables
7.3.5.3. In-Memory and ACID
7.3.6. Redirect to an external TSQLRest
7.3.7. Virtual Tables to access external databases
7.3.8. Virtual tables from the client side
8. External SQL database access
8.1. SynDB direct RDBMS access
8.1.1. Direct access to any RDBMS engine
8.1.2. Data types
8.1.3. Database types
8.1.4. SynDB Units
8.1.5. SynDB Classes
8.1.6. ISQLDBRows interface
8.1.7. Using the ISQLDBRows interface properly
8.1.8. Late-binding
8.1.9. TDataset and SynDB
8.1.10. TQuery emulation class
8.1.11. Storing connection properties as JSON
8.2. SynDB clients
8.2.1. OleDB or ODBC to rule them all
8.2.2. ZEOS via direct ZDBC
8.2.2.1. The mORMot's best friend
8.2.2.2. Recommended version
8.2.2.3. Connection samples
8.2.3. Oracle via OCI
8.2.3.1. Optimized client library
8.2.3.2. Direct connection without Client installation
8.2.3.3. Oracle Wallet support
8.2.4. SQLite3
8.2.5. DB.pas libraries
8.2.5.1. NexusDB access
8.2.5.2. FireDAC / AnyDAC library
8.2.5.3. UniDAC library
8.2.5.4. BDE engine
8.2.6. Remote access via HTTP
8.2.6.1. Server and Client classes
8.2.6.2. Publish a SynDB connection over HTTP
8.2.6.3. SynDB client access via HTTP
8.2.6.4. Advanced use cases
8.2.6.5. Integration with SynDBExplorer
8.2.6.6. Do not forget the mORMot!
8.3. SynDB ORM Integration
8.3.1. Code-first or database-first
8.3.2. Code-first ORM
8.3.3. Database-first ORM
8.3.4. Sharing the database with legacy code
8.3.5. Auto-mapping of conflicting SQL field names
8.3.6. External database ORM internals
8.3.7. Tuning the process
9. External NoSQL database access
9.1. SynMongoDB client
9.1.1. Connecting to a server
9.1.2. Adding some documents to the collection
9.1.3. Retrieving the documents
9.1.3.1. Updating or deleting documents
9.1.4. Write Concern and Performance
9.2. MongoDB + ORM = ODM
9.2.1. Define the TSQLRecord class
9.2.2. Register the TSQLRecord class
9.2.3. ORM/ODM CRUD methods
9.2.4. ODM complex queries
9.2.5. BATCH mode
9.2.6. ORM/ODM performance
10. JSON RESTful Client-Server
10.1. JSON
10.1.1. Why use JSON?
10.1.2. Values serialization
10.1.3. Record serialization
10.1.3.1. Automatic serialization via Enhanced RTTI
10.1.3.2. Serialization for older Delphi versions
10.1.3.2.1. Default Binary/Base64 serialization
10.1.3.2.2. Custom serialization
10.1.3.2.3. Defining callbacks
10.1.3.2.4. Text-based definition
10.1.4. Dynamic array serialization
10.1.4.1. Standard JSON arrays
10.1.4.2. Customized serialization
10.1.5. TSQLRecord TPersistent TStrings TRawUTF8List
10.1.6. TObject serialization
10.1.6.1. Custom class serialization
10.1.6.2. Custom field names serialization
10.1.6.3. TObjectList serialization
10.2. REST
10.2.1. What is REST?
10.2.1.1. Resource-based
10.2.1.2. Unique Identifier
10.2.1.3. Interfaces
10.2.1.4. By Representation
10.2.1.5. Stateless
10.2.2. RESTful mORMot
10.2.2.1. BLOB fields
10.2.2.2. JSON representation
10.2.2.3. Stateless ORM
10.3. REST and JSON
10.3.1. JSON format density
10.3.2. JSON (not) expanded layouts
10.3.3. JSON global cache
11. Client-Server process
11.1. TSQLRest classes
11.1.1. Server classes
11.1.2. Storage classes
11.1.3. Client classes
11.2. In-process/stand-alone application
11.3. Local access via named pipes or Windows messages
11.4. Network and Internet access via HTTP
11.4.1. HTTP server(s)
11.4.2. High-performance http.sys server
11.4.2.1. Use the http.sys server
11.4.2.2. URI authorization as Administrator
11.4.2.2.1. Secure specific authorization
11.4.2.2.2. Automatic authorization
11.4.2.2.3. Manual URI authorization
11.4.2.3. HTTP API 2.0 Features
11.4.3. HTTP client(s)
11.4.4. HTTPS server
11.4.4.1. Certificates
11.4.4.2. Configure a Port with an SSL certificate
11.4.5. Custom Encodings
11.4.5.1. SynLZ/deflate compression
11.4.5.2. AES encryption over HTTP
11.4.5.3. Prefer WebSockets between mORMot nodes
11.5. Thread-safety
11.5.1. Thread safe design
11.5.2. Advanced threading settings
11.5.3. Proven behavior
11.5.4. Highly concurrent clients performance
12. Client-Server ORM
12.1. ORM as local or remote
12.2. Stateless design
12.2.1. Server side synchronization
12.2.2. Client side synchronization
12.2.3. Let applications be responsive
12.3. BATCH sequences for adding/updating/deleting records
12.3.1. BATCH process
12.3.2. Transmitted JSON
12.3.3. Unit Of Work pattern
12.3.3.1. Several Batches
12.3.3.2. Updating only the mapped fields
12.3.4. Local network as bottleneck
12.3.4.1. Array binding
12.3.4.1.1. For faster BATCH mode
12.3.4.1.2. For faster IN clause
12.3.4.2. Optimized SQL for bulk insert
12.4. CRUD level cache
12.4.1. Where to cache
12.4.2. When to cache
12.4.3. What to cache
12.4.4. How to cache
12.4.5. Business logic and API cache
13. Server side SQL/ORM process
13.1. Optimize for performance
13.1.1. Profiling your application
13.1.2. Client-side caching
13.1.3. Write business logic as Services
13.1.4. Using ORM full power
13.2. Stored procedures
13.2.1. Why stored procedures should be avoided
13.2.2. Stored procedures, anyway
13.2.2.1. Custom SQL functions
13.2.2.1.1. Implementing a function
13.2.2.1.2. Registering a function
13.2.2.1.2.1. Direct low-level SQLite3 registration
13.2.2.1.2.2. Class-driven registration
13.2.2.1.2.3. Custom class definition
13.2.2.2. Low-level SQLite3 stored procedure in Delphi
13.2.2.3. External stored procedure
13.3. Server side Services
14. Client-Server services via methods
14.1. Publishing a service on the server
14.2. Defining the client
14.3. Direct parameter marshalling on server side
14.4. Returning non-JSON content
14.5. Advanced process on server side
14.6. Browser speed-up for unmodified requests
14.7. Returning file content
14.8. JSON Web Tokens (JWT)
14.9. Handling errors
14.10. Benefits and limitations of this implementation
15. Interfaces
15.1. Delphi and interfaces
15.1.1. Declaring an interface
15.1.2. Implementing an interface with a class
15.1.3. Using an interface
15.1.4. There is more than one way to do it
15.1.5. Here comes the magic
15.2. SOLID design principles
15.2.1. Single Responsibility Principle
15.2.1.1. Splitting classes
15.2.1.2. Do not mix UI and logic
15.2.2. Open/Closed Principle
15.2.2.1. Applied to our framework units
15.2.2.2. Open/Closed in practice
15.2.2.3. No Singleton nor global variables
15.2.3. Liskov Substitution Principle
15.2.3.1. Use parent classes
15.2.3.2. I'm your father, Luke
15.2.3.3. Don't check the type at runtime
15.2.3.4. Partially abstract classes
15.2.3.5. Messing units dependencies
15.2.3.6. Practical advantages
15.2.4. Interface Segregation Principle
15.2.4.1. Consequence of the other principles
15.2.4.2. Using interfaces
15.2.5. Dependency Inversion Principle
15.2.5.1. Upside Down Development
15.2.5.2. Injection patterns
15.3. Circular reference and (zeroing) weak pointers
15.3.1. Weak pointers
15.3.2. Handling weak pointers
15.3.3. Zeroing weak pointers
15.3.4. Weak pointers functions implementation details
15.4. Interfaces in practice: dependency injection, stubs and mocks
15.4.1. Dependency Injection at constructors
15.4.2. Why use fake / emulated interfaces?
15.4.2.1. Stubs and mocks
15.4.2.2. Defining stubs
15.4.2.3. Defining a mock
15.4.2.4. Running the test
15.5. Stubs and Mocks in mORMot
15.5.1. Direct use of interface types without TypeInfo()
15.5.2. Manual dependency injection
15.5.3. Stubbing complex return values
15.5.4. Stubbing via a custom delegate or callback
15.5.4.1. Delegate with named variant parameters
15.5.4.2. Delegate with indexed variant parameters
15.5.4.3. Delegate with JSON parameters
15.5.4.4. Accessing the test case when mocking
15.5.5. Calls tracing
15.6. Dependency Injection and Interface Resolution
16. Client-Server services via interfaces
16.1. Implemented features
16.2. How to make services
16.3. Defining a service contract
16.3.1. Define an interface
16.3.2. Service Methods Parameters
16.3.3. TPersistent / TSQLRecord parameters
16.3.4. Record parameters
16.3.5. TCollection parameters
16.3.5.1. Use of TCollection
16.3.5.2. Inherit from TInterfacedCollection
16.3.5.3. Register a TCollection type
16.4. Server side
16.4.1. Implementing the service contract
16.4.2. Set up the Server factory
16.4.3. Instances life time implementation
16.4.4. Accessing low-level execution context
16.4.4.1. Retrieve information from the global ServiceContext
16.4.4.2. Implement your service from TInjectableObjectRest
16.4.5. Using services on the Server side
16.5. Client side
16.5.1. Set up the Client factory
16.5.2. Using services on the Client side
16.6. Sample code
16.6.1. The shared contract
16.6.2. The server sample application
16.6.3. The client sample application
16.6.4. Enhanced sample: remote SQL access
16.7. Asynchronous callbacks
16.7.1. WebSockets support
16.7.1.1. Using a "Saga" callback to notify long-term end-of-process
16.7.1.2. Client service consumption
16.7.1.3. Server side implementation
16.7.2. Publish-subscribe for events
16.7.2.1. Defining the interfaces
16.7.2.2. Writing the Publisher
16.7.2.3. Consuming the service from the Subscriber side
16.7.2.4. Subscriber multiple redirection
16.7.2.5. Proper threaded implementation
16.7.2.6. Interacting with UI/VCL
16.7.3. Interface callbacks instead of class messages
16.7.3.1. Using service and callback interfaces
16.7.3.2. Classical message(s) event
16.7.3.3. Workflow adaptation
16.7.3.4. From interfaces comes abstraction and ease
16.8. Implementation details
16.8.1. Error handling
16.8.2. Security
16.8.3. Implementation class types
16.8.4. Server-side execution options (threading)
16.8.5. Audit Trail for Services
16.8.5.1. When logging is not enough
16.8.5.2. Tracing Service Methods
16.8.5.3. Tracing Asynchronous External Calls
16.8.6. Transmission content
16.8.6.1. Request format
16.8.6.1.1. REST mode
16.8.6.1.1.1. Parameters transmitted as JSON array
16.8.6.1.1.2. Parameters transmitted as JSON object
16.8.6.1.1.3. Parameters encoded at URI level
16.8.6.1.1.4. Sending a JSON object
16.8.6.1.1.5. Sending raw binary
16.8.6.1.2. JSON-RPC
16.8.6.1.2.1. Parameters transmitted as JSON array
16.8.6.1.2.2. Parameters transmitted as JSON object
16.8.6.1.3. REST mode or JSON-RPC mode?
16.8.6.2. Response format
16.8.6.2.1. Standard answer as JSON object
16.8.6.2.1.1. JSON answers
16.8.6.2.1.1.1. Returning as JSON array
16.8.6.2.1.1.2. Returning a JSON object
16.8.6.2.1.2. Returning raw JSON content
16.8.6.2.1.3. Returning errors
16.8.6.2.2. Returning content as XML
16.8.6.2.2.1. Always return XML content
16.8.6.2.2.2. Return XML content on demand
16.8.6.2.3. Custom returned content
16.9. Comparison with WCF
17. Cross-Platform clients
17.1. Available client platforms
17.1.1. Delphi FMX / FreePascal FCL cross-platform support
17.1.1.1. Cross-platform JSON
17.1.1.2. Delphi OSX and NextGen
17.1.1.3. FreePascal clients
17.1.1.4. Local or remote logging
17.1.2. Smart Mobile Studio support
17.1.2.1. Beyond JavaScript
17.1.2.2. Using Smart Mobile Studio with mORMot
17.1.3. Remote logging
17.2. Generating client wrappers
17.2.1. Publishing the code generator
17.2.2. Delphi / FreePascal client samples
17.2.2.1. Connection to the server
17.2.2.2. CRUD/ORM remote access
17.2.2.3. Service consumption
17.2.3. Smart Mobile Studio client samples
17.2.3.1. Adding two numbers in AJAX
17.2.3.2. CRUD/ORM remote access
18. MVC pattern
18.1. Model
18.2. Views
18.2.1. Desktop clients
18.2.1.1. RTTI
18.2.1.2. User Interface
18.2.2. Web clients
18.2.2.1. Mustache template engine
18.2.2.2. Mustache principles
18.2.2.3. Mustache templates
18.2.2.3.1. Variables
18.2.2.3.2. Sections
18.2.2.3.3. Inverted Sections
18.2.2.3.4. Partials
18.2.2.4. SynMustache unit
18.2.2.4.1. Variables
18.2.2.4.2. Sections
18.2.2.4.3. Partials
18.2.2.4.4. Expression Helpers
18.2.2.4.5. Internationalization
18.2.2.5. Low-level integration with method-based services
18.2.2.6. MVC/MVVM Design
19. MVC/MVVM Web applications
19.1. MVCModel
19.1.1. Data Model
19.1.2. Hosted in a REST server over HTTP
19.2. MVCViewModel
19.2.1. Defining the commands
19.2.2. Implementing the Controller
19.2.3. Variable input parameters
19.2.4. Using Services in the Controller
19.2.5. Controller Thread Safety
19.2.6. Web Sessions
19.3. Writing the Views
20. Hosting
20.1. Windows and Linux hosted
20.2. Deployment Architecture
20.2.1. Shared server
20.2.2. Two servers
20.2.3. Two instances on the same server
20.2.4. Scaling via CDN
21. Security
21.1. Authentication
21.1.1. Principles
21.1.1.1. HTTP basic auth over HTTPS
21.1.1.2. Session via Cookies
21.1.1.3. Query Authentication
21.1.2. Framework authentication
21.1.2.1. Per-User authentication
21.1.2.2. Session handling
21.1.3. Authentication schemes
21.1.3.1. Class-driven authentication
21.1.3.2. mORMot secure RESTful authentication
21.1.3.3. Authentication using Windows credentials
21.1.3.3.1. Windows Authentication
21.1.3.3.2. Using NTLM or Kerberos
21.1.3.4. Weak authentication
21.1.3.5. HTTP Basic authentication
21.1.4. Clients authentication
21.1.4.1. Client interactivity
21.1.4.2. Authentication using AJAX
21.1.5. JWT Authentication
21.2. Authorization
21.2.1. Per-table access rights
21.2.2. Additional safety
21.2.2.1. SQL remote execution
21.2.2.2. Service execution
22. Scripting Engine
22.1. Scripting abilities
22.2. SpiderMonkey integration
22.2.1. A powerful JavaScript engine
22.2.2. Direct access to the SpiderMonkey API
22.2.3. Execution scheme
22.2.4. Creating your execution context
22.2.5. Blocking threading model
22.3. Interaction with existing code
22.3.1. Proper engine initialization
22.3.2. Calling Delphi code from JavaScript
22.3.3. TSMVariant custom type
22.3.4. Calling JavaScript code from Delphi
23. Asymmetric Encryption
23.1. Public-key Cryptography
23.1.1. Keys Generation and Distribution
23.1.2. Message Authentication
23.1.3. Certificates and Public Key Infrastructure
23.1.4. Message encryption
23.2. Elliptic Curve Cryptography
23.2.1. Introducing SynEcc
23.2.2. ECC command line tool
23.2.3. Keys and Certificates Generation
23.2.4. TECCCertificate and TECCCertificateSecret
23.2.5. File Signature
23.2.6. Signing in Code
23.2.7. File Encryption
23.2.8. Private Keys Passwords Cheat Mode
23.2.9. Encryption in Code
23.3. Application Locking
23.3.1. From the User perspective
23.3.2. From the Support Team perspective
23.3.3. Benefits of Asymmetric Encryption for License management
24. Domain-Driven-Design
24.1. Domain
24.2. Modeling
24.2.1. Several Models to rule them all
24.2.2. The state of the model
24.2.3. Composition
24.3. DDD model
24.3.1. Ubiquitous Language
24.3.2. Value Objects and Entities
24.3.3. Aggregates
24.3.4. Factory and Repository patterns
24.3.5. DTO and Events to avoid domain leaking
24.3.6. Services
24.3.7. Clean Uncoupled Architecture
24.4. mORMot's DDD
24.4.1. Designer's commitments
24.4.2. Defining objects in Delphi
24.4.3. Defining DDD objects in mORMot
24.4.3.1. Use framework types for DDD objects
24.4.3.2. Define uncoupled DDD objects
24.4.3.3. Specialize your simple types
24.4.3.4. Define your PODO classes
24.4.3.5. Store your Entities in CQRS Repositories
24.4.3.5.1. CQRS Interfaces
24.4.3.5.2. Queries Interface
24.4.3.5.3. Commands Interface
24.4.3.5.4. Automated Repository using the ORM
24.4.3.5.4.1. DDD / ORM mapping
24.4.3.5.4.2. Define the Factory
24.4.3.5.4.3. Implement the CQRS methods
24.4.3.6. Isolate your Domain using DTOs
24.4.4. Defining services
24.4.5. Event-Driven Design
24.4.5.1. Events as Callbacks
24.4.5.2. Event Sourcing via Event Oriented Databases
24.4.6. Building a Clean architecture
25. Testing and logging
25.1. Automated testing
25.1.1. Involved classes in Unitary testing
25.1.2. First steps in testing
25.1.3. Framework test coverage
25.2. Enhanced logging
25.2.1. Setup logging
25.2.2. Call trace
25.2.3. Including symbol definitions
25.2.4. Exception handling
25.2.5. Serialization
25.2.6. Multi-threaded applications
25.2.7. Log to the console
25.2.8. Remote logging
25.2.9. Log to third-party libraries
25.2.10. Automated log archival
25.2.11. Log files rotation
25.2.12. Integration within tests
25.2.13. Log Viewer
25.2.13.1. Open log files
25.2.13.2. Log browser
25.2.13.3. Customer-side profiler
25.2.13.4. Per-thread inspection
25.2.13.5. Server for remote logging
25.2.14. Framework log integration
26. Source code
26.1. License
26.1.1. Three Licenses Model
26.1.2. Publish modifications and credit for the library
26.1.3. Derivative Open Source works
26.1.4. Commercial licenses
26.2. Availability
26.2.1. Obtaining the Source Code
26.2.2. Expected compilation platform
26.2.3. 32-bit sqlite3*.obj and 64-bit SQLite3 dll
26.2.4. SpiderMonkey library
26.2.5. Folder layout
26.2.5.1. Root folder
26.2.5.2. SynDBDataset folder
26.2.5.3. SQLite3 folder
26.2.5.4. CrossPlatform folder
26.3. Delphi Installation
26.3.1. Manual download
26.3.2. Get from GitHub
26.3.3. Setup the Delphi IDE
26.4. FreePascal / Lazarus Installation
26.4.1. Possible targets
26.4.2. Setup your dedicated FPC / Lazarus environment with fpcupdeluxe
26.4.3. Missing RTTI for interfaces in old FPC 2.6
26.4.4. Writing your project for FPC
26.4.5. Linux VM installation tips
26.5. CrossKylix support
26.5.1. What is Cross-Kylix?
26.5.2. Running Kylix 32-bit executables on 64-bit Linux
26.6. Upgrading from a 1.17 revision
28. SynFile application
28.1. General architecture
28.2. Database design
28.3. Client Server implementation
28.4. User Interface generation
28.4.1. Rendering
28.4.2. Enumeration types
28.4.3. ORM Registration
28.4.4. Main window
28.5. Report generation
28.6. Application i18n and L10n
28.6.1. Creating the reference file
28.6.2. Adding a new language
28.6.3. Language selection
28.6.4. Manual translation
28.6.5. TForm / TFrame hook
28.6.6. Localization
30. SWRS implications
Software Architecture Design Reference Table
30.1. Client Server ORM/SOA framework
30.1.1. SWRS # DI-2.1.1
30.1.2. SWRS # DI-2.1.1.1
30.1.3. SWRS # DI-2.1.1.2.1
30.1.4. SWRS # DI-2.1.1.2.2
30.1.5. SWRS # DI-2.1.1.2.3
30.1.6. SWRS # DI-2.1.1.2.4
30.1.7. SWRS # DI-2.1.2
30.1.8. SWRS # DI-2.1.3
30.1.9. SWRS # DI-2.1.4
30.1.10. SWRS # DI-2.1.5
30.2. SQLite3 engine
30.2.1. SWRS # DI-2.2.1
30.2.2. SWRS # DI-2.2.2
30.3. User interface
30.3.1. SWRS # DI-2.3.1.1
30.3.2. SWRS # DI-2.3.1.2
30.3.3. SWRS # DI-2.3.1.3
30.3.4. SWRS # DI-2.3.2
The whole software documentation process follows a typical sequence of steps, from the Software Requirements Specifications (SWRS) through this Software Architecture Design (SAD) down to the Software Design Document (SDD).
This Software Architecture Design (SAD) document applies to the 1.18 release of the Synopse mORMot Framework library.
After a detailed presentation of the framework architecture and its main features, each source code unit is described, with diagrams and tables showing the dependencies between the units and the class hierarchy of the objects implemented within.
The SynFile main demo is presented on its own, and can be used as a general User Guide for the framework's basic ORM features and User Interface generation - see the dedicated chapter below.
At the end of this document, Software Requirements Specifications (SWRS) items are linked directly to the classes and functions involved, as identified in the Software Design Document (SDD) from the source code.
GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/> Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for software and other kinds of works.
The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too.
When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.
Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions.
Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based on the Program.
To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work.
A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work.
The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source.
The Corresponding Source for a work in source code form is that same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures.
When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified it, and giving a relevant date.
b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices".
c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so.
A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways:
a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b.
d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d.
A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product.
"Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.
If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM).
The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or authors of the material; or
e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors.
All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11).
However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.
Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party.
If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it.
A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation.
If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program.
Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.> Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author> This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see <http://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read <http://www.gnu.org/philosophy/why-not-lgpl.html>.
The main features of mORMot are therefore:
With local or remote access, via an auto-configuring Client-Server REST design.
mORMot offers all the features needed for building any kind of modern software project, with state-of-the-art integrated software components, designed for both completeness and complementarity, offering convention-over-configuration solutions, and implemented for speed and efficiency.
For storing some data, you define a class, and the framework will take care of everything: routing, JSON marshalling, table creation, SQL generation, validation.
For creating a service, you define an interface and a class, and you are done. Of course, the same ORM/ODM or SOA methods will run on both server and client sides: code once, use everywhere!
For building an MVC web site, write a Controller class in Delphi, then some HTML Views using Mustache templates, leveraging the same ORM/ODM or SOA methods as Model.
If you need an HTTP server, a proxy redirection, master/slave replication, publish-subscribe, a test, a mock, added security, user and rights management, a script engine, a report, a User Interface, a switch to XML format or dynamically published HTML pages - just pick up the right class or method. If you need a tool or feature, it is probably already there, waiting for you to use it.
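As a minimal sketch of the "define a class" point above - TSQLBaby being a hypothetical entity, close to the framework's own samples - such a plain Delphi class definition is all the ORM needs:
type
  // each published property of this TSQLRecord descendant becomes a column
  // (requires the SynCommons and mORMot units)
  TSQLBaby = class(TSQLRecord)
  private
    fName: RawUTF8;
    fBirthDate: TDateTime;
  published
    property Name: RawUTF8 read fName write fName;
    property BirthDate: TDateTime read fBirthDate write fBirthDate;
  end;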
The table of contents of this document makes it clear: this is no ordinary piece of software.
The mORMot framework provides an Open Source self-sufficient set of units (even the Delphi Starter edition is enough) for creating any Multi-tier application, up to the most complex Domain-Driven design - see below:
- an Application layer implementing Service-Oriented Architecture via interface-based services (like WCF) and Client-Server ORM - following a RESTful model using JSON over several communication protocols (e.g. HTTP/1.1 and HTTPS);
- a Domain Model layer handling the business logic in plain Delphi objects, including variant storage with late-binding for dynamic documents - your business logic may also be completed in JavaScript on the server side, as stated below;
- a Data persistence layer with ORM access over a direct connection or any DB.pas provider (e.g. NexusDB, DBExpress, FireDAC, AnyDAC, UniDAC...), with a powerful SQLite3 kernel, and direct SQL access if needed - including SQL auto-generation for SQLite3, Oracle, Jet/MSAccess, MS SQL, Firebird, DB2, PostgreSQL, MySQL, Informix and NexusDB - the ORM being also able to use NoSQL engines via a native MongoDB connection, for ODM persistence.
If you do not know some of those concepts, don't worry: this document will detail them - see below.
With mORMot, ORM is not used only for data persistence of objects in databases (like in other implementations), but as part of a global n-Tier, Service Oriented Architecture (SOA), ready to implement Domain-Driven solutions.
mORMot is not another ORM on which a transmission layer has been added, like almost everything existing in Delphi, C# or Java: this is a full Client-Server ORM/SOA from the ground up. This really makes the difference.
The business logic of your applications will be easily exposed as Services, and will be accessible from light clients (written in Delphi or by any other means, including AJAX).
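As an illustration - ICalculator is the canonical contract used in the framework samples - a service is defined as a plain Delphi interface:
type
  // the contract shared by client and server: a plain interface type
  ICalculator = interface(IInvokable)
    ['{9A60C8ED-CEB2-4E09-87D4-4A16F496E5FE}']
    function Add(n1,n2: integer): integer;
  end;
On the server side, a class implementing this interface is registered, e.g. via Server.ServiceDefine(TServiceCalculator,[ICalculator],sicShared) - TServiceCalculator being your implementation class; on the client side, the very same interface type is used to call it remotely.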
The framework core is non-visual: it provides only a set of classes to be used from code. But some UI units are also available (including screen auto-creation, reporting and ribbon GUI), and you can use the framework from any RAD, web, or AJAX client.
No dependency is needed on the client side (no DB driver, nor third-party runtime): it is able to connect via standard HTTP or HTTPS, even through a corporate proxy or a VPN. Rich Delphi clients can be deployed just by copying and running a stand-alone small executable, with no installation process. Client authentication is performed via several secure methods, and communication can be encrypted via HTTPS or with a proprietary SHA/AES-256 algorithm. SOA endpoints are configured automatically for each published interface on both server and client sides, and creating a load-balancing proxy is a matter of one method call. Changing from one database engine to another is just a matter of one line of code; full audit-trail history is available, if needed, to track all changes of any class persisted by the ORM/ODM.
Cross-platform clients can be easily created: Win32 and Win64 executables of course, but also clients for any platform supported by the Delphi compiler (including Mac OSX, iPhone/iPad and Android), or by FreePascal / Lazarus. AJAX applications can easily be created via Smart Mobile Studio, and any mobile operating system can be targeted as an HTML5 web rich client or a stand-alone PhoneGap application, ready to be added to the Windows, Apple or Google stores. See below how mORMot client code generation leverages all platforms.
Speed and scalability have been implemented from the ground up - see below: a genuinely optimized multi-threaded core lets a single server handle more than 50,000 concurrent clients, faster than DataSnap, WCF or node.js, and our rich SOA design is able to implement both vertically and horizontally scalable hosting, using recognized enterprise-level SQL or NoSQL databases for storage.
In short, with mORMot, your ROI is maximized.
The Synopse mORMot framework implements a Client-Server RESTful architecture, trying to follow some MVC, N-Tier, ORM, SOA best-practice patterns - see below.
Several clients can access the same remote or local server, using diverse communication protocols:
Or the application can be stand-alone: switching from this embedded architecture to the Client-Server one is just a matter of how the mORMot classes are initialized, as sketched below. For instance, the very same executable can even run as a stand-alone application, a server, or a client, depending on some run-time parameters!
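Here is a minimal sketch of that switch - Model is assumed to be an already created TSQLModel instance (see the ORM sections below), and StandAlone a hypothetical run-time flag; the units involved would be mORMotSQLite3 and mORMotHttpClient:
var Rest: TSQLRest;
begin
  if StandAlone then
    // embedded mode: direct SQLite3 storage in a local file
    Rest := TSQLRestServerDB.Create(Model,'data.db3')
  else
    // client mode: same Model, but remote access over HTTP
    Rest := TSQLHttpClient.Create('localhost','8080',Model);
  // from here, the very same ORM/SOA code runs against Rest
end;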
At first, some points can be highlighted, which make this framework distinct from other available solutions:
- services can exchange any TStrings, TCollection, TPersistent or TObject (via registration of a custom serializer) instance, or even a dynamic array, or any record content, with integrated JSON serialization, via an interface-based contract shared on both client and server sides;
- the SynDB.pas classes are self-sufficient, and do not depend on the Delphi DB.pas unit nor on any third-party library (so even the Delphi Starter edition is enough) - but the SynDBDataset unit is also available to access any DB.pas based solution (e.g. NexusDB, DBExpress, FireDAC, AnyDAC, UniDAC or even the BDE...).
As you can see from the previous section, mORMot provides a comprehensive set of features that can help you to manage your cross-cutting concerns through a reusable set of components and core functionality.
Of course, like many developers, you may suffer from the well-known NIH ("Not Invented Here") syndrome. On the other hand, it is a commonly accepted fact that the use of standard and proven code libraries and components can save development time, minimize costs, reduce the use of precious test resources, and decrease the overall maintenance effort: these are the main benefits you can expect from mORMot.
Even if mORMot will be more easily used in a project designed from scratch, it fits very well the purpose of evolving any existing Delphi project, or creating the server side part of an AJAX application.
One benefit of such a framework is to facilitate the transition from a traditional Client-Server architecture to an N-Tier layered pattern.
Due to its modular design, you can integrate some framework bricks into your existing application:
- our record or dynamic array wrappers - see below - or our dynamic document storage via variant - see below, including JSON or binary persistence;
- the TQuery emulation class - see below - to replace some BDE queries, or introduce nice unique features like direct database access or array binding for very fast data insertion - see below, or switch to a NoSQL database - see below;
- reporting via the mORMotReport.pas code-based system, which is very easy to use even on the server side (serving PDF files), when your business logic heavily relies on objects, not direct DB - see below.
mORMot implements the techniques needed for introducing what Michael Feathers calls, in his book Working Effectively With Legacy Code, a seam. A seam is an area where you can start to cleave off some legacy code and begin to introduce changes. Even the mocking abilities of mORMot - see below - will help you in this delicate task - see http://www.infoq.com/articles/Utilizing-Logging
Do not forget that Synopse, as a company, is able to offer dedicated audit and support for such a migration. The sooner, the better.
Before you go any further, we propose below a simple FAQ containing the most frequent questions we have received on our forums.
First of all, take a look at the keyword index available at the very beginning of this document. The underlined entries target the main article(s) about a given concept or technical term.
Feel free to give your feedback at http://synopse.info/forum asking new questions or improving answers!
Your SAD doc is too long to read through in a short period.
Too much documentation can kill the documentation! You do not need to read the whole document: most of it is a detailed description of every unit, object, or class. But the first part is worth reading, otherwise you are very likely to miss some main concepts or patterns. It just takes 15-30 minutes! Consider also the slides available at https://drive.google.com/folderview?id=0B0r8u-FwvxWdeVJVZnBhSEpKYkE
Where should I start?
Take a look at the Architecture principles below, then download and install the sources, compile and run the TestSQL3.dpr program as stated below. Check about ORM below, SOA below and MVC below, then test the various samples (from the SQLite3\Samples folder), especially 01, 02, 04, 11, 12, 14, 17, 26, 28, 30 and the MainDemo.
So far, I can see that mORMot fits most of the requirements, but it seems to be only for database Client-Server apps.
First of all, the framework is a set of bricks, so you can use it e.g. to build interface-based services, even with no database at all. We tried to make its main features modular and uncoupled.
I am not a great fan of ORM, sorry. I still like SQL and have some experience with it; sometimes a sophisticated SQL query is hard to translate into ORM code.
ORM can make development much easier, but you can use e.g. interface-based services and "manual" SQL statements - in this case, you have at hand the direct database access classes described below, which allow very high performance and direct export to JSON.
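For instance, a short SynDB sketch - Props is assumed to be an already created TSQLDBConnectionProperties instance, and Domain a hypothetical table:
procedure ListDomains(Props: TSQLDBConnectionProperties);
var I: ISQLDBRows;
begin
  // parameters are bound via ?, and rows are consumed one by one
  I := Props.Execute('select * from Domain where Name like ?',['a%']);
  while I.Step do
    writeln(I.ColumnString('Name')); // I['Name'] also works, as a variant
end;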
I am tempted by using an ORM, but mORMot forces you to inherit from a root TSQLRecord type, whereas I'd like to use any kind of object.
We will discuss this in detail below. Adding attributes to an existing class is tempting, but would pollute your code in the end, mixing persistence and business logic: see Persistence Ignorance and Aggregates below. The framework proposes a second level of object mapping, allowing to persist any kind of PODO (Plain Old Delphi Object), by defining CQRS services - see below.
Why are you not using the latest features of the compiler, like generics or class attributes?
Our ORM does not rely on generics, but on the power of the object pascal type system: specifying a class or interface type as parameter is safe and efficient - and generics tend to bloat the executable size, lower performance (the current RTL is not very optimized, and sometimes buggy), and hide implementation details. Some methods are available for newer versions of the compiler, introducing access via generics, but it was not mandatory to depend on them. We also identified, as several Java or C# gurus did, that class attributes may sound like a good idea, but tend to pollute the code, and introduce unexpected coupling. Last but not least, those features are incompatible with the older versions of Delphi we would like to support, and may reduce compatibility with FPC.
I also noticed in your SAD doc that data types differ from Delphi's. You have RawUTF8, etc., which puzzles me: what are they?
You can for sure use standard Delphi string types, but some more optimized types were defined: since the whole framework is UTF-8 based, we defined a dedicated RawUTF8 type, which works with all versions of Delphi, before and after Delphi 2009. Just search for RawUTF8 in the keyword index of this document, or see below.
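In short, RawUTF8 is an AnsiString variant storing UTF-8 encoded text; conversion functions are provided by SynCommons, e.g.:
var u: RawUTF8;
    s: string;
begin
  u := 'some text';        // RawUTF8 stores UTF-8 encoded content
  s := UTF8ToString(u);    // convert to the generic VCL/RTL string type
  u := StringToUTF8(s);    // and back, whatever the Delphi version
end;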
During my tests, my client receives non-standard JSON with unquoted fields.
Internally, the framework uses JSON in MongoDB extended syntax, i.e. fields are not quoted - this gives better performance and reduces memory and bandwidth with a mORMot client. To receive "field":value instead of field:value, just add a proper User-Agent HTTP header to the client request (as any browser does), and the server will emit standard JSON.
When I work with floating points and JSON, sometimes numerical values with more than 4 decimals are converted into JSON strings.
By default, double values are disabled in the JSON serialization, to avoid any hidden precision loss during conversion: see below how to enable them.
I got an access violation with SynDB ISQLDBRows.
You need to explicitly release the ISQLDBRows instance, by setting it to nil, before freeing the owner's connection - see below.
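In practice - a minimal sketch, Props being the owner connection properties and MyTable a hypothetical table:
var Rows: ISQLDBRows;
begin
  Rows := Props.Execute('select * from MyTable',[]);
  while Rows.Step do
    ProcessRow(Rows); // ProcessRow is a hypothetical handler
  Rows := nil; // release the interface BEFORE freeing the connection
  Props.Free;
end;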
Deadlock occurs with interface callbacks.
When working with asynchronous notifications over WebSockets, you need to ensure you do not fire a callback directly from a main method execution - see below for several solutions.
All the objects seem to be non-VCL components, meaning you need to code each property and remember them all.
This is indeed... a feature. The framework is not RAD, but fully object-oriented. Thanks to the Delphi IDE, you can access all property descriptions via auto-completion and/or code navigation. We tried to make the documentation exhaustive and accurate. You can still use RAD for UI design, but let the business logic be abstracted in pure code. See e.g. the mORMotVCL.pas unit, which can publish any ORM result as a TDataSource for your UI.
I know you joined the DataSnap performance discussion, and your performance earned a good reputation there. If I want to use your framework to replace my old DataSnap project, how easy will it be?
If you used DataSnap to build method-based services, translation into mORMot will be just a matter of code refactoring. And you will benefit from new features like interface-based services - see below - which are much more advanced than the method-based pattern, avoid generating the client class via a wizard, and offer additional features - see below.
What is SMS? Does it have any advantage compared to jQuery or AngularJS?
Smart Mobile Studio is an IDE and a runtime library able to develop and compile an Object Pascal project into an HTML5 / CSS3 / JavaScript embedded application, i.e. able to work stand-alone with no remote server. When used with mORMot on the server side, you can use the very same object pascal language on both server and client sides, with strong typing and true OOP design. You then get secure authentication and JSON communication, with connected or off-line mode. Your SmartPascal client code can be generated by your mORMot server, as stated below.
I am trying to find a substitute for WebSnap. Do you have any sample or documentation describing how to build a robust web server?
You can indeed easily create a modern, scalable MVC / MVVM Web Application. Your mORMot server can publish its ORM / SOA business logic as the Model, use Mustache logic-less templates for rendering the Views - see below - and define the ViewModel / Controller as regular Delphi methods. See below for more details, and to discover a sample "blog" application.
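As a small taste of the Mustache engine - a sketch assuming the SynMustache unit is in your uses clause:
var html: RawUTF8;
begin
  // logic-less template rendered against a TDocVariant context
  html := TSynMustache.Parse('Hello {{name}}!').Render(_ObjFast(['name','World']));
  // html now equals 'Hello World!'
end;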
Have you considered using a popular source coding host like Github or BitBucket?
We love to host our own source code repository, and find fossil a perfect match for our needs, with a friendly approach. But we created a parallel repository on GitHub, so that you may be able to monitor or fork our projects - see http://github.com/synopse/mORMot
Note that you can get a daily snapshot of our official source code repository directly from http://synopse.info/files/mORMotNightlyBuild.zip
Why is this framework named mORMot?
- Because its initial identifier was "Synopse SQLite3 database framework", which may have suggested a SQLite3-only library, whereas the framework is now able to connect to any database engine;
- Because we like mountains, and those large ground rodents;
- Because marmots do hibernate, just like our precious objects;
- Because marmots are highly social and use loud whistles to communicate with one another, just like our applications are designed not to be isolated;
- Because even if they eat greens, they fight in Spring for their realm;
- Because it may be an acronym for "Manage Object Relational Mapping Over Territory", or whatever you may think of...
All those points render possible any project implementation, up to complex Domain-Driven design - see below.
A general design of the mORMot architecture is shown in the following diagram:
In addition, you may use the framework's transversal features. Don't be afraid: such a drawing may sound huge and confusing, especially if you come from a RAD background and did not work much with modern design patterns. The following pages will detail and explain how the framework implements this architecture, and sample code is available to help you discover the amazing mORMot realm.
In the previous diagram, you can already identify some key concepts of mORMot:
The first point to state is that you can't talk about architecture in isolation. Architecture is always driven by the actual needs of the application, not by whatever the architect read about last night and is dying to see how it works in the real world. There is no "one architecture fits all" nor "one framework fits all" solution. Architecture is just a way of thinking about how you build your own software.
In fact, software architecture is not about theory and diagrams, nor just about best practice, but about a way of implementing a working solution for your customers.
This diagram presents how Architecture is part of a typical SCRUM iterative agile process. Even if some people in your company may be in charge of global software architecture, or even if your project management follows a classic V-cycle and does not follow the agile manifesto, architecture should never be seen as a set of rules to be applied by each and every developer. Architecture is part of the coding, but not all the coding. Here are some ways of achieving weak design:
Therefore, some advice:
The purpose of frameworks like mORMot is to provide your teams with a working and integrated set of classes, so that you can focus on your product, enjoying the collaboration with other Open Source users, in order to benefit from an evolving and pertinent software architecture.
The Model-View-Controller (MVC) is a software architecture, currently considered an architectural pattern used in software engineering. The pattern isolates "domain logic" (the application logic for the user) from the user interface (input and presentation), permitting independent development, testing and maintenance of each (separation of concerns).
The Model manages the behavior and data of the application domain, responds to requests for information about its state (usually from the view), and responds to instructions to change state (usually from the controller). In Event-Driven systems, the model notifies observers (usually views) when the information changes so that they can react - but since our ORM is stateless, it does not need to handle those events - see below.
The View renders the model into a form suitable for interaction, typically a user interface element. Multiple views can exist for a single model for different purposes. A viewport typically has a one to one correspondence with a display surface and knows how to render to it.
The Controller receives user input and initiates a response by making calls on model objects. A controller accepts input from the user and instructs the model and viewport to perform actions based on that input.
In the framework, the model is not necessarily merely a database; the model in MVC is both the data and the business/domain logic needed to manipulate the data in the application. In our ORM, a model is implemented via a TSQLModel class, which centralizes all TSQLRecord inherited classes used by an application, both database-related and business-logic related.
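Defining a model is typically a one-liner - here reusing the hypothetical TSQLBaby class introduced above, with 'root' as the URI prefix used for REST routing:
Model := TSQLModel.Create([TSQLBaby],'root');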
The views can be implemented using:
The controller is mainly already implemented in our framework, within the RESTful commands, and will interact with both the associated view (e.g. for refreshing the User Interface) and the model (for data handling). Some custom actions, related to the business logic, can be implemented via some custom TSQLRecord classes or via custom RESTful Services - see below.
In software engineering, multi-tier architecture (often referred to as n-tier architecture) is a client–server architecture in which the presentation, the application processing, and the data management are logically separate processes. For example, an application that uses middle-ware to service data requests between a user and a database employs multi-tier architecture. The most widespread use of multi-tier architecture is the three-tier architecture.
In practice, a typical VCL/FMX RAD application written in Delphi has a two-tier architecture:
In this approach, the Application Tier mixes the UI and the logic in forms and modules.
Both ORM and SOA aspects of our RESTful framework make it easy to develop using a more versatile three-tier architecture.
The Synopse mORMot Framework follows this development pattern: presentation is separated from business logic, which is exposed as Services defined as a Delphi interface, up to a Domain-Driven design - see below - if your project reaches some level of complexity. In fact, mORMot can scale up to a Domain-Driven Design four-tier architecture - see below - as such:
Note that you have to make a difference between physical and logical n-tier architecture. Most of the time, n-Tier is intended to be a physical (hardware) view, for instance a separation between the database server and the application server, placing the database on a separate machine to facilitate ease of maintenance. In mORMot, and more generally in SOA - see below, we deal with logical layout, with separation of layers through interfaces - see below - and the underlying hardware implementation will usually not match the logical layout.
In this document, we will focus on the logical way of thinking / coding, letting the physical deployment be made according to end-user expectations.
Service-Oriented Architecture (SOA) is a flexible set of design principles used during the phases of systems development and integration in computing. A system based on a SOA will package functionality as a suite of inter-operable services that can be used within multiple, separate systems from several business domains.
A software service is a logical representation of a repeatable activity that produces a precise result. In short, a consumer asks a producer to act in order to produce a result. Most of the time, this invocation does not depend on any previous invocation (it is therefore called stateless).
The SOA implementations rely on a mesh of software services. Services comprise unassociated, loosely coupled units of functionality that have no calls to each other embedded in them. Each service implements one action, such as filling out an online application for an account, or viewing an online bank statement, or placing an online booking or airline ticket order. Rather than services embedding calls to each other in their source code, they use defined protocols that describe how services pass and parse messages using description meta-data.
Since most of those services are by definition stateless, some kind of service composition is commonly defined to provide a logical multi-tier orchestration of services. A higher-level service invokes several services to work as a self-contained, stateless service; as a result, lower-level services can still be stateless, but the consumer of the higher-level service is able to safely process some kind of transactional process. For more details about SOA, see http://en.wikipedia.org/wiki/Service-oriented_architecture
SOA is mainly about decoupling.
That is, it enables implementation independence in a variety of ways, for instance:
Dependency | Desired decoupling | Decoupling technique |
Platform | Hardware, Framework or Operating System should not constrain choices of the Services consumers | Standard protocols, mainly Web services (e.g. SOAP or RESTful/JSON) |
Location | Consumers may not be impacted by service hosting changes | Routing and proxies will maintain Services access |
Availability | Maintenance tasks shall be transparent | Remote access allows centralized support on Server side |
Versions | New services shall be introduced without requiring upgrades of clients | Contract marshalling can be implemented on the Server side |
SOA and ORM - see below - are not mutually exclusive. In fact, even if some software architects tend to use only one of the two features, both can coexist and furthermore complement each other, in any Client-Server application:
In particular, SOA will help keeping the business logic on the Server side, therefore strengthening the Multi-tier architecture. By reducing the back-and-forth between the Client and the Server, it will also reduce the network bandwidth, and the Server resources (it will always cost less to run the service on the Server than on the Client, adding all remote connection and serialization to the needed database access). Our interface-based SOA model allows the same code to run on both the client and the server side, with a much better performance on the server side, but a full interoperability of both sides.
In practice, an ORM gives a set of methods to ease high-level object persistence into an RDBMS.
Our Delphi class instances are not directly usable with a relational database, which has been for decades the most convenient way of persisting data. So some kind of "glue" is needed to let class properties be saved into one or several tables. You can interact with the database using its native language, aka SQL. But SQL by itself is a full programming language, with diverse flavors depending on the exact backend engine (just think about how you define a column type able to store text). So writing and maintaining your SQL statements may become a time-consuming, difficult and error-prone task.
Sometimes, there will be nothing better than a tuned SQL statement, able to aggregate and join information from several tables. But most of the time, you will just need to perform some basic operations, known as CRUD (for Create Retrieve Update Delete) on well-identified objects: this is where an ORM may be a huge help, since it is able to generate the SQL statements for you.
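For instance, the basic CRUD operations could look as follows - a sketch reusing the hypothetical TSQLBaby class, Client being any TSQLRest instance (local or remote):
var Baby: TSQLBaby;
    ID: TID;
begin
  Baby := TSQLBaby.Create;
  try
    Baby.Name := 'Smith';
    ID := Client.Add(Baby,true);      // Create: INSERT generated for you
  finally
    Baby.Free;
  end;
  Baby := TSQLBaby.Create(Client,ID); // Retrieve: SELECT by primary key
  try
    Baby.Name := 'Smeeth';
    Client.Update(Baby);              // Update: UPDATE of the mapped fields
  finally
    Baby.Free;
  end;
  Client.Delete(TSQLBaby,ID);         // Delete: DELETE by primary key
end;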
The ORM works in fact as such: its core retrieves the information needed to perform the mapping, mainly from the class type definition (via RTTI). Since several implementation schemes are possible, we will first discuss the pros and the cons of each one.
First, here is a diagram presenting some common implementation schemes of database access with Delphi (which mirrors what you would find in most other languages or frameworks, including C# or Java).
The table below is an admittedly subjective (but not therefore wrong) résumé of some common schemes in the Delphi world. ORM is just one nice possibility among others. Of course, you will find out that our framework implements a Client-Server ORM, which can be down-sized to stand-alone mode if needed, but which is, thanks to its unique implementation, scalable to any complex Domain-Driven Design.
As far as we found out, looking at every language and technology around, almost no other ORM supports such a native Client-Server orientation. The usual practice is to use a Service-Oriented Architecture (SOA) for remote access to the ORM. Some projects allow remote access to an existing ORM, but as separate projects. Our mORMot is pretty unique, with respect to its RESTful Client-Server orientation, from the ground up.
If you entered the Delphi world years ago, you may be pretty fluent with the RAD approach. But you probably also discovered how difficult it is to maintain an application which mixes UI components, business logic and database queries. Today's software users have huge ergonomic expectations about software usability: screens with grids and buttons that merely map the database definitely won't be appealing. Using mORMot's ORM / SOA approach will help you focus on your business and your clients' expectations, letting the framework perform most of the plumbing for you.
SQL is the de facto standard for data manipulation.
NoSQL is a newer paradigm, named as such in early 2009 (even if some database engines, like Lotus Domino, have fitted the definition for decades):
We can identify two main families of NoSQL databases:
Graph-oriented databases store data by their relations / associations:
Such databases are very useful e.g. for developing any "social" software, which will value its data by the relations between every node. Such a data model does not fit well with the relational model, whereas a NoSQL engine like Neo4j handles this kind of data natively. Note that, by design, Graph-oriented databases are ACID. But the main NoSQL database family is populated by the Aggregate-oriented databases. By Aggregate, we mean the same definition as will be used below for Domain-Driven Design: a collection of data that we interact with as a unit, which forms the boundaries for ACID operations in a given model.
In fact, Aggregate-oriented databases can be specified as three main implementation/query patterns:
Some of them can be schema-less (meaning that the data layout is not fixed, and can evolve on the fly without re-indexing the whole database) - but column-driven databases do have a schema - or even store plain BLOBs of data (this is the purpose of Key/Value engines, which focus on storage speed and rely on the client side to process the data).
In short, a RDBMS stores data per table, and needs to JOIN the references to get the aggregated information:
Whereas NoSQL stores its aggregates as documents: the whole data is embedded in one. This may be represented as the following JSON - see below - data:
{
  "ID": 1234,
  "UserName": "John Smith",
  "Contact": { "Phone": "123-456-789", "Email": "xyz@abc.com" },
  "Access": { "Level": 5, "Group": "dev" }
}
Such a document will fit directly with the object programming model, without the need of thinking about JOINed queries and database plumbing.
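With mORMot, such a document can be handled directly as a TDocVariant - a sketch using the _Json() function from SynCommons:
var doc: variant;
    phone: string;
begin
  doc := _Json('{"ID":1234,"UserName":"John Smith",'+
    '"Contact":{"Phone":"123-456-789","Email":"xyz@abc.com"}}');
  phone := doc.Contact.Phone; // late-binding access to a nested field
  writeln(phone);             // writes '123-456-789'
end;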
As a result, we can discuss the two data models:
The Relational model features normalization of data, i.e. organize the fields and tables of a relational database to minimize redundancy.
On the other hand, the Document model features denormalization of data, to optimize the read performance of a database by adding redundant data or by grouping data. It also features horizontal scaling of the servers, since data can easily be balanced among several servers, without the speed penalty of performing a remote JOIN.
One of the main difficulties, when working with NoSQL, is to define how to denormalize the data, and when to store the data in normalized format.
One good habit is to model your data depending on the most current queries you will have to perform. For instance, you may embed sub-documents which will be very likely to be requested by your application most of the time. Note that most NoSQL engines feature a projection mechanism, which allows you to return only the needed fields for a query, leaving the sub-documents on the server if you do not need them at this time. The less frequent queries may be executed over separated collections, populated e.g. with consolidated information.
Since NoSQL databases have fewer hard-and-fast rules than their relational ancestors, you are more likely to tune your model, depending on your expectations. In practice, you may spend less time thinking about "how" to store the data than with a RDBMS, and are still able to normalize information later, if needed. NoSQL engines do not fear redundant information, as long as you follow the rule of letting the client application take care of the whole data consistency (e.g. via one ORM).
As you may have noticed, this Document data model is much closer to the OOP paradigm than the classic relational scheme. A new family of frameworks has even appeared together with NoSQL adoption, named Object-Document Mapping (ODM), which is to NoSQL databases what Object-Relational Mapping (ORM) is to RDBMS.
In short, both approaches have benefits, which are to be weighted.
With mORMot, you can switch from a classic SQL engine to a trendy MongoDB server in just one line of code, when initializing the data on the server side. You can switch from ORM to ODM at any time, even at runtime, e.g. for a demanding customer.
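That "one line of code" is the registration of a class on a MongoDB collection - a sketch assuming Server (a TSQLRestServer) and MongoDatabase (a TMongoDatabase) have been created as detailed below:
// from mORMotMongoDB.pas: route TSQLBaby persistence to MongoDB (ODM)
StaticMongoDBRegister(TSQLBaby,Server,MongoDatabase);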
http://domaindrivendesign.org gives the somewhat "official" definition of Domain-Driven design (DDD):
Over the last decade or two, a philosophy has developed as an undercurrent in the object community. The premise of domain-driven design is two-fold:
Domain-driven design is not a technology or a methodology. It is a way of thinking and a set of priorities, aimed at accelerating software projects that have to deal with complicated domains.
Of course, this particular architecture is customizable according to the needs of each project. We simply propose following an architecture that serves as a baseline to be modified or adapted by architects according to their needs and requirements.
In respect to other kinds of Multi-tier architecture, DDD introduces some restrictive patterns, for a cleaner design:
The following diagram is a map of the patterns presented and the relationships between them.
It is inspired by the one included in Eric Evans's reference book, "Domain-Driven Design", Addison-Wesley, 2004 (and updated to take into account some points that have appeared since).
Domain-Driven design is not to be used everywhere, and in every situation.
First of all, the following are prerequisites for using DDD:
Then check that DDD is worth it, i.e. if:
Perhaps DDD sounds more appealing to you now. In this case, our mORMot framework will provide all the bricks you need to implement it, focusing on your domain and letting the libraries do all the needed plumbing.
If you have identified that DDD is not to be used now, you will still find in mORMot the tools you need, and be ready to switch to DDD when it becomes necessary.
Legacy code and existing projects will benefit from DDD patterns. Finding so-called seams, along with isolating your core domain, can be extremely valuable when using DDD techniques to refactor and tighten the highest value parts of your code. It is not mandatory to re-write your whole existing software with DDD patterns everywhere: once you have identified where your business strategy's core is, you can introduce DDD progressively in this area. Then, following continuous feedback, you will refine your code, adding regression tests, and isolating your domain code from end-user code.
For a technical introduction about DDD and how mORMot can help you implement this design, see below.
With mORMot, your software solution will never be stuck in a dead-end. You will always be able to adapt to your customers' needs, and maximize your ROI.
The Synopse mORMot framework consists of a large number of units, so we will start by introducing them.
The main units you have to be familiar with are the following:
Unit name | Description |
SynCommons.pas | Common types, classes and functions |
mORMot.pas | Main unit of the ORM / SOA framework |
SynSQLite3.pas | SQLite3 database engine |
mORMotSQLite3.pas | Bridge between mORMot.pas and SynSQLite3.pas |
SynDB.pas | Direct RDBMS access classes |
mORMotDB.pas | ORM external SynDB.pas access, via SQlite3 virtual tables |
SynMongoDB.pas | Direct access to a MongoDB server |
SynSM.pas | SpiderMonkey JavaScript engine |
mORMotHttpClient.pas / mORMotHttpServer.pas | RESTful HTTP/1.1 Client and Server |
mORMotMVC.pas | MVC classes for writing Web Applications |
mORMotUI*.pas | Grid and Forms User Interface generation |
mORMotToolBar.pas | ORM ToolBar User Interface generation |
mORMotReport.pas | Integrated Reporting engine |
Other units are available in the framework source code repository, but are either expected by the files above (e.g. the SynDB*.pas database providers), or used only optionally in end-user cross-platform client applications (e.g. the CrossPlatform folder).
In the following pages, the features offered by those units will be presented.
Do not forget to take a look at all the sample projects available in the SQLite3\Samples sub-folders - nothing is better than some simple code to look at.
Then detailed information will be available in the second part of this document - see below.
It was a design choice to use some custom low-level types, classes and functions instead of calling the official Delphi RTL.
Benefits could be:
In order to use the Synopse mORMot framework, you should be familiar with some of those definitions.
First of all, a Synopse.inc include file is provided, and is included in most of the framework units:
{$I Synopse.inc} // define HASINLINE USETYPEINFO CPU32 CPU64
It defines some conditionals, helping you write portable and efficient code.
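For instance, such conditionals may be used as follows (a minimal sketch):

function MaxOf(a, b: integer): integer;
  {$ifdef HASINLINE}inline;{$endif} // inlined only where the compiler supports it
begin
  if a > b then
    result := a
  else
    result := b;
end;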
In the following paragraphs, we will comment on some main features of the lowest-level part of the framework, mainly located in SynCommons.pas:
- Unicode and UTF-8 handling;
- the Currency type;
- dynamic array wrappers (TDynArray and TDynArrayHashed);
- the TDocVariant custom variant type for dynamic schema-less object or array storage.
Other shared features available in SynTests.pas and SynLog.pas will be detailed later, i.e. Testing and Logging - see below.
Our mORMot framework is fully UNICODE compatible, i.e. it compiles under Delphi 2009 and up (including the latest Delphi 10.3 Rio revision). The code has been deeply rewritten and tested, in order to provide compatibility with the String=UnicodeString paradigm of these compilers. The code also handles Unicode safely with older versions, i.e. from Delphi 6 up to Delphi 2007.
From its core to its uppermost features, our framework is natively UTF-8, which is the de-facto character encoding for JSON, SQLite3, and most supported database engines. This allows our code to offer fast streaming/parsing in a SAX-like mode, avoiding any conversion between encodings from the storage layer to your business logic. We also needed to establish a secure way to use strings, in order to handle all versions of Delphi (even pre-Unicode versions, especially the Delphi 7 version we like so much), and to provide compatibility with the FreePascal Compiler. This consistency allows us to circumvent any RTL bug or limitation, and eases long-term support of your project.
Some string types have been defined, and used in the code for best cross-compiler efficiency:
- RawUTF8 is used for every internal data usage, since both SQLite3 and JSON expect UTF-8 encoding;
- WinAnsiString where WinAnsi-encoded AnsiString (code page 1252) content is needed;
- generic string for i18n (e.g. in unit mORMoti18n), i.e. text ready to be used within the VCL, as either AnsiString (for Delphi 2 to 2007) or UnicodeString (for Delphi 2009 and later);
- RawUnicode in some technical places (e.g. direct Win32 *W() API calls in Delphi 7) - note that this type is NOT compatible with the Delphi 2009+ UnicodeString;
- RawByteString for byte storage (e.g. for the FileFromString() function);
- SynUnicode is the fastest available native Unicode string type, depending on the compiler used (i.e. WideString before Delphi 2009, and UnicodeString since);
- UnicodeString is to be used only inside {$ifdef UNICODE}...{$endif} blocks;
- never use AnsiString directly, but one of the types above.
Note that RawUTF8 is the preferred string type to be used in our framework when defining textual properties in a TSQLRecord, and for all internal data processing. It is only when you reach the User Interface layer that you may explicitly convert the RawUTF8 content into the generic VCL string type, using either the Language.UTF8ToString method (from the mORMoti18n.pas unit) or the following function from SynCommons.pas:
/// convert any UTF-8 encoded String into a generic VCL Text
// - it's preferred to use TLanguageFile.UTF8ToString() in mORMoti18n.pas,
// which will handle full i18n of your application
// - it will work as is with Delphi 2009+ (direct unicode conversion)
// - under older versions of Delphi (no unicode), it will use the
// current RTL codepage, as with WideString conversion (but without slow
// WideString usage)
function UTF8ToString(const Text: RawUTF8): string;
Of course, the StringToUTF8 method or function is available to send back some text to the ORM layer.
A lot of dedicated conversion functions (including to/from numerical values) are included in SynCommons.pas. Those were optimized for speed and multi-thread capabilities, and to avoid implicit conversions involving a temporary string variable.
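A minimal round-trip between both layers may therefore look like this:

var u: RawUTF8;
    s: string;
begin
  u := StringToUTF8('Some VCL text'); // from the UI layer to the framework
  s := UTF8ToString(u);               // back to a generic VCL string
end;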
Warnings during the compilation process are not allowed, especially under Unicode versions of Delphi (e.g. Delphi 2010): all string conversions from the types above are made explicitly in the framework's code, to avoid any unattended data loss.
If you are using an older version of Delphi, and have an existing code base involving a lot of WideString variables, you may take a look at the SynFastWideString.pas unit. Adding this unit at the top of your .dpr uses clause will let all WideString processing use the Delphi heap and its very efficient FastMM4 memory manager, instead of the much slower BSTR Windows API. The performance gain can be more than 50 times, if your existing code uses a lot of WideString variables. Note that using this unit will break compatibility with BSTR/COM/OLE kinds of string, so it is not to be used with COM objects. In all cases, if you need Unicode support with older versions of Delphi, consider using our RawUTF8 type instead, which is much better integrated with our framework, and has less overhead.
A faster and safer way of comparing two currency values is certainly to map the variables to their internal Int64 binary representation, as such:
function CompCurrency(var A,B: currency): Int64;
var A64: Int64 absolute A;
    B64: Int64 absolute B;
begin
  result := A64-B64;
end;
This will avoid any rounding error during comparison (working with *10000 integer values), and will be faster than the default implementation, which uses the FPU (or SSE2 under x64 architecture) instructions.
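As a short usage sketch:

var a, b: currency;
begin
  a := 1.2345;
  b := 1.2346;
  if CompCurrency(a, b) < 0 then // compares the two Int64 *10000 representations
    writeln('a is lower than b');
end;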
You will find some direct currency handling in the SynCommons.pas unit. It will bypass the FPU use, and is therefore very fast.
There are some functions using the Int64 binary representation (accessible either as PInt64(@aCurrencyVar)^ or via the absolute syntax):
function Curr64ToString(Value: Int64): string;
function StrToCurr64(P: PUTF8Char): Int64;
function Curr64ToStr(Value: Int64): RawUTF8;
function Curr64ToPChar(Value: Int64; Dest: PUTF8Char): PtrInt;
function StrCurr64(P: PAnsiChar; const Value: Int64): PAnsiChar;
Using those functions can be much faster for textual conversion than the standard FloatToText() implementation. They are validated with the provided regression tests.
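For instance, converting a currency value to text via its Int64 representation may be written as such:

var c: currency;
    s: RawUTF8;
begin
  c := 123.4567;
  s := Curr64ToStr(PInt64(@c)^); // s = '123.4567'
end;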
Of course, in normal code, it is certainly not worth using the Int64 binary representation of currency: better rely on the default compiler/RTL implementation. In all cases, having optimized functions was needed for both speed and accuracy of our ORM data processing, and also for the features described below.
Version 1.13 of the SynCommons.pas unit introduced two kinds of wrapper:
- some record-oriented functions: RecordEquals, RecordSave, RecordSaveLength, RecordLoad;
- the TDynArray and TDynArrayHashed objects, which are wrappers around any dynamic array.
With TDynArray, you can access any dynamic array (like TIntegerDynArray = array of integer) using TList-like properties and methods, e.g. Count, Add, Insert, Delete, Clear, IndexOf, Find, Sort, and some new methods like LoadFromStream, SaveToStream, LoadFrom, SaveTo, Slice, Reverse and AddArray. It includes e.g. fast binary serialization of any dynamic array, even those containing strings or records - a CreateOrderedIndex method is also available to create individual indexes according to the dynamic array content. You can also serialize the array content into JSON, if you wish.
One benefit of dynamic arrays is that they are reference-counted, so they do not need any Create/try..finally...Free code, and are well handled by the Delphi compiler. For performance-critical tasks, dynamic array access is very optimized, since its whole content will be allocated at once, therefore reducing the memory fragmentation and being much more CPU cache friendly.
Dynamic arrays are no replacement for a TCollection nor a TList (which are the standard and efficient ways of storing class instances, and are also handled as published properties since revision 1.13 of the framework), but they are a very handy way of having a list of content or a dictionary at hand, with no previous class nor property definition.
You can look at them like Python's lists, tuples (via record handling) and dictionaries (via the Find method, especially with the dedicated TDynArrayHashed wrapper), in pure Delphi. Our new methods (about searching and serialization) allow most usages of those script-level structures in your Delphi code.
In order to handle dynamic arrays in our ORM, some RTTI-based structures were designed for this task. Since dynamic arrays of records would be necessary, some low-level fast access to the record content, using the common RTTI, has also been implemented (much faster than the "new" enhanced RTTI available since Delphi 2010).
Here is how you can have method-driven access to the dynamic array:
type
  TGroup = array of integer;
var
  Group: TGroup;
  GroupA: TDynArray;
  i, v: integer;
begin
  GroupA.Init(TypeInfo(TGroup),Group); // associate GroupA with Group
  for i := 0 to 1000 do begin
    v := i+1000; // need argument passed as a const variable
    GroupA.Add(v);
  end;
  v := 1500;
  if GroupA.IndexOf(v)<0 then // search by content
    ShowMessage('Error: 1500 not found!');
  for i := GroupA.Count-1 downto 0 do
    if i and 3=0 then
      GroupA.Delete(i); // delete integer at index i
end;
This TDynArray wrapper will also work with array of string or array of record...
Records need only to be packed, and to contain only non-reference-counted fields (byte, integer, double...) or string or variant reference-counted fields (there is no support for nested Interface yet). TDynArray is able to handle a record within a record, and even dynamic arrays within a record.
Yes, you read that right: it will handle a dynamic array of record, in which you can put some string or whatever data you need.
The IndexOf() method will search by content. That is, e.g. for an array of record, all record field contents (including string properties) must match.
Note that TDynArray is just a wrapper around an existing dynamic array variable. In the code above, the Add and Delete methods are modifying the content of the Group variable. You can therefore initialize a TDynArray wrapper on need, to access any native Delphi dynamic array more efficiently. TDynArray doesn't contain any data: the elements are stored in the dynamic array variable, not in the TDynArray instance.
Some methods were defined in the TDynArray wrapper which are not available in a plain TList - with those methods, we come closer to some native generics implementation:
- the array content can be serialized into binary (via the LoadFromStream/SaveToStream or LoadFrom/SaveTo methods) - it will use a proprietary but very fast binary stream layout;
- the array can be sorted, either in place (via the Sort method in this case) or via an external integer index look-up array (using the CreateOrderedIndex method - in this case, you can have several orders to the same data);
- once sorted, the Find method can use fast binary search, if available.
Here is how those new methods work:
var Test: RawByteString;
...
  Test := GroupA.SaveTo;
  GroupA.Clear;
  GroupA.LoadFrom(Test);
  GroupA.Compare := SortDynArrayInteger;
  GroupA.Sort;
  for i := 1 to GroupA.Count-1 do
    if Group[i]<Group[i-1] then
      ShowMessage('Error: unsorted!');
  v := 1500;
  if GroupA.Find(v)<0 then // fast binary search
    ShowMessage('Error: 1500 not found!');
Some unique methods like Slice, Reverse or AddArray are also available, and mimic well-known Python methods.
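For instance, reusing the GroupA wrapper and the Group array from the samples above, a quick sketch could be (check the exact method signatures in SynCommons.pas):

var Sub: TIntegerDynArray;
begin
  GroupA.Reverse;       // in-place reversal, like Python's list.reverse()
  GroupA.Slice(Sub, 3); // copy the first 3 items into Sub, like Group[0:3]
  GroupA.AddArray(Sub); // append all Sub items back, like Group += Sub
end;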
This brings us still closer to the generics paradigm, while working from Delphi 6 up to Delphi 10.3 Rio, without the need for the slow enhanced RTTI, nor the executable size overhead and compilation issues of generics...
One common speed issue with the default usage of TDynArray is that the internal memory buffer is reallocated when you change its length, just like a regular Delphi dynamic array.
That is, whenever you call the Add or Delete methods, an internal call to SetLength(DynArrayVariable) is performed. This could be slow, because it always executes some extra code, including a call to ReallocMem.
In order not to suffer from this, you can define an external Count value, as an Integer variable.
In this case, Length(DynArrayVariable) will be the memory capacity of the dynamic array, and the exact number of stored items will be available from this Count variable. A Count property is exposed by TDynArray, and will always reflect the number of items stored in the dynamic array. It will point either to the external Count variable, if defined, or reflect Length(DynArrayVariable), just as usual. A Capacity property is also exposed by TDynArray, and will reflect the capacity of the dynamic array: in case of an external Count variable, it will reflect Length(DynArrayVariable).
As a result, adding or deleting items could be much faster.
var
  Group: TIntegerDynArray;
  GroupA: TDynArray;
  GroupCount, i, v: integer;
begin
  GroupA.Init(TypeInfo(TIntegerDynArray),Group,@GroupCount);
  GroupA.Capacity := 1023; // reserve memory
  for i := 0 to 1000 do begin
    v := i+1000; // need argument passed as a const variable
    GroupA.Add(v); // faster than with no external GroupCount variable
  end;
  Check(GroupA.Count=1001);
  Check(GroupA.Capacity=1023);
  Check(GroupA.Capacity=length(Group));
The TDynArray wrapper features some native JSON serialization abilities: the TTextWriter.AddDynArrayJSON and TDynArray.LoadFromJSON methods are available for UTF-8 JSON serialization of dynamic arrays. See below for all the details about this unique feature.
The TTestLowLevelCommon._TDynArray and _TDynArrayHashed methods implement the automated unitary tests associated with these wrappers. You will find there samples of dynamic array handling and more advanced features, with various kinds of data (from plain TIntegerDynArray to records within records).
The TDynArrayHashed wrapper allows the implementation of a dictionary using a dynamic array of record. For instance, the prepared statement cache is handled by the following code in SynSQLite3.pas:
TSQLStatementCache = record
  StatementSQL: RawUTF8;
  Statement: TSQLRequest;
end;
TSQLStatementCacheDynArray = array of TSQLStatementCache;
TSQLStatementCached = object
  Cache: TSQLStatementCacheDynArray;
  Count: integer;
  Caches: TDynArrayHashed;
  DB: TSQLite3DB;
  procedure Init(aDB: TSQLite3DB);
  function Prepare(const GenericSQL: RawUTF8): PSQLRequest;
  procedure ReleaseAllDBStatements;
end;
Those definitions will prepare a dynamic array storing TSQLRequest and SQL statement associations, with an external Count variable, for better speed.
It will be used as such in TSQLRestServerDB:
constructor TSQLRestServerDB.Create(aModel: TSQLModel; aDB: TSQLDataBase);
begin
  fStatementCache.Init(aDB);
  (...)
The wrapper will be initialized in the object constructor:
procedure TSQLStatementCached.Init(aDB: TSQLite3DB);
begin
  Caches.Init(TypeInfo(TSQLStatementCacheDynArray),Cache,nil,nil,nil,@Count);
  DB := aDB;
end;
The TDynArrayHashed.Init method will recognize that the first TSQLStatementCache field is a RawUTF8, so it will set by default an AnsiString hashing of this first field (we could specify a custom hash function or content hashing by overriding the default nil parameters with some custom functions).
So we can specify directly a GenericSQL variable as the first parameter of FindHashedForAdding, since this method will only access the first RawUTF8 field content, and won't handle the whole record content. In fact, the FindHashedForAdding method will perform all the hashing, search, and new item adding if necessary - just in one step. Note that this method only prepares for adding, and the code needs to explicitly set the StatementSQL content in case of an item creation:
function TSQLStatementCached.Prepare(const GenericSQL: RawUTF8): PSQLRequest;
var added: boolean;
begin
  with Cache[Caches.FindHashedForAdding(GenericSQL,added)] do begin
    if added then begin
      StatementSQL := GenericSQL; // need to explicitly set the content
      Statement.Prepare(DB,GenericSQL);
    end else begin
      Statement.Reset;
      Statement.BindReset;
    end;
    result := @Statement;
  end;
end;
The last method of TSQLStatementCached will just loop through each statement, and close it: you can note that this code uses the dynamic array just as usual:
procedure TSQLStatementCached.ReleaseAllDBStatements;
var i: integer;
begin
  for i := 0 to Count-1 do
    Cache[i].Statement.Close; // close prepared statement
  Caches.Clear; // same as SetLength(Cache,0) + Count := 0
end;
The resulting code is definitely quick to execute, and easy to read/maintain.
If your purpose is to access a dynamic array using one of its fields as key, consider using TDynArrayHashed. This wrapper, inheriting from TDynArray, will store a hashed index of one field of the dynamic array record, for very efficient lookup. For a few dozen entries, it won't change the performance, but once you reach thousands of items, an index will be much faster - almost O(1) instead of O(n).
With respect to TDynArray, a TDynArrayHashed instance's lifetime should be consistent with the dynamic array itself, to ensure the hashed index is properly populated. You should also ensure that the dynamic array content is modified mainly via the TDynArrayHashed.FindHashedForAdding, TDynArrayHashed.FindHashedAndUpdate and TDynArrayHashed.FindHashedAndDelete methods, or explicitly call TDynArrayHashed.ReHash when the dynamic array content has been modified.
In practice, TDynArrayHashed.FindHashed will be much faster than a regular TDynArray.Find call.
One step further is available with the TSynDictionary class. It is a thread-safe dictionary, storing some values identified by associated keys, as two separate dynamic arrays.
Each TSynDictionary instance will hold and store the associated dynamic arrays - this is not the case with TDynArray and TDynArrayHashed, which are only wrappers around an existing dynamic array variable.
One big advantage is that access to the TSynDictionary methods is thread-safe by design: internally, a TSynLocker will protect the keys, maintained by a TDynArrayHashed instance, and the values, maintained by a TDynArray. Access to/from local variables will be made via explicit copy, for perfect thread safety.
For advanced use, TSynDictionary offers JSON serialization and binary storage (with optional compression), and the ability to specify a timeout period in seconds, after which any call to TSynDictionary.DeleteDeprecated will delete older entries - which is very convenient for caching values, with optional persistence on disk. Just like your own in-process Redis/MemCached instance.
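A minimal cache sketch, assuming the TSynDictionary API as published in SynCommons.pas (with Create() expecting the key and value dynamic array type information, plus an optional timeout in seconds):

var cache: TSynDictionary;
    value: integer;
begin
  // RawUTF8 keys and integer values, deprecated after 60 seconds
  cache := TSynDictionary.Create(
    TypeInfo(TRawUTF8DynArray), TypeInfo(TIntegerDynArray), false, 60);
  try
    cache.Add('one', 1);                    // thread-safe addition
    if cache.FindAndCopy('one', value) then // thread-safe copy of the value
      writeln(value);                       // will write '1'
    cache.DeleteDeprecated;                 // would delete entries older than 60 s
  finally
    cache.Free;
  end;
end;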
With revision 1.18 of the framework, we introduced two new custom types of variant:
- the TDocVariant kind of variant;
- the TBSONVariant kind of variant.
The second custom type (which handles MongoDB-specific extensions - like ObjectID or other specific types like dates or binary) will be presented later, when dealing with MongoDB support in mORMot, together with the BSON kind of content. BSON / MongoDB support is implemented in the SynMongoDB.pas unit.
We will now focus on TDocVariant itself, which is a generic container of JSON-like objects or arrays. This custom variant type is implemented in the SynCommons.pas unit, so it is ready to be used anywhere in your code, even without any link to the mORMot ORM kernel, or MongoDB.
TDocVariant implements a custom variant type which can be used to store any JSON/BSON document-based content, i.e. either:
- name/value pairs, for object-oriented documents (the dvObject sub-type);
- an array of values (including nested documents), for array-oriented documents (the dvArray sub-type);
- any combination of the two, by nesting TDocVariant instances.
Here are the main features of this custom variant type:
- direct access to the internal names and values arrays from code, by trans-typing into a TDocVariantData record;
- instance life-time is managed by the compiler (like any other variant type), without the need to use interfaces or explicit try..finally blocks;
- optimized to use as little memory and CPU resources as possible (it does not allocate one class instance per node, but relies on pre-allocated arrays);
- integrated with record serialization - see below;
- designed to work with our ORM: any TSQLRecord instance containing such variant custom types as published properties will be recognized by the ORM core, and work as expected with any database back-end (storing the content as JSON in a TEXT column);
- designed to work with our SOA: any interface-based service - see below - is able to consume or publish such kind of content, as variant kind of parameters;
- fully integrated with the Delphi IDE: any variant instance will be displayed as JSON in the IDE debugger, making it very convenient to work with.
To create instances of such a variant, you can use some easy-to-remember functions:
- the _Obj() and _ObjFast() global functions, to create a variant object document;
- the _Arr() and _ArrFast() global functions, to create a variant array document;
- the _Json(), _JsonFast(), _JsonFmt() and _JsonFastFmt() global functions, to create any variant object or array document from JSON, supplied either with standard or MongoDB-extended syntax.
You have two non-exclusive ways of using the TDocVariant storage:
- as variant variables, then using either late-binding or the faster _Safe() function to access the data;
- as TDocVariantData variables, then later on returning a variant instance using variant(aDocVariantData).
Note that you do not need to protect any stack-allocated TDocVariantData instance with a try..finally, since the compiler will do it for you. This record type has a lot of powerful methods, e.g. to apply map/reduce on the content, or perform advanced searches or marshalling.
The most straightforward way is to use late-binding to set the properties of a new TDocVariant instance:
var V: variant;
...
  TDocVariant.New(V); // or slightly slower V := TDocVariant.New;
  V.name := 'John';
  V.year := 1972;
  // now V contains {"name":"John","year":1972}
With _Obj(), an object variant instance will be initialized with data supplied two by two, as Name,Value pairs, e.g.
var V1,V2: variant; // stored as any variant
...
  V1 := _Obj(['name','John','year',1972]);
  V2 := _Obj(['name','John','doc',_Obj(['one',1,'two',2.5])]); // with nested objects
Then you can convert those objects into JSON, by two means:
- use the VariantSaveJson() function, which returns directly one UTF-8 content;
- or cast the variant instance into a string (this will be slower, but is possible).

writeln(VariantSaveJson(V1)); // explicit conversion into RawUTF8
writeln(V1); // implicit conversion from variant into string
// both commands will write '{"name":"John","year":1972}'
writeln(VariantSaveJson(V2)); // explicit conversion into RawUTF8
writeln(V2); // implicit conversion from variant into string
// both commands will write '{"name":"John","doc":{"one":1,"two":2.5}}'
As a consequence, the Delphi IDE debugger is able to display such variant values as their JSON representation. That is, V1 will be displayed as '{"name":"John","year":1972}' in the IDE debugger Watch List window, or in the Evaluate/Modify (F7) expression tool. This is pretty convenient, and much more user friendly than any class-based solution (which requires the installation of a specific design-time package in the IDE).
You can access the object properties via late-binding, with any depth of nested objects, in your code:
writeln('name=',V1.name,' year=',V1.year);
// will write 'name=John year=1972'
writeln('name=',V2.name,' doc.one=',V2.doc.one,' doc.two=',V2.doc.two);
// will write 'name=John doc.one=1 doc.two=2.5'
V1.name := 'Mark'; // overwrite a property value
writeln(V1.name); // will write 'Mark'
V1.age := 12; // add a property to the object
writeln(V1.age); // will write '12'
Note that the property names will be evaluated at runtime only, not at compile time. For instance, if you write V1.nome instead of V1.name, there will be no error at compilation, but an EDocVariant exception will be raised at execution (unless you set the dvoReturnNullForUnknownProperty option to _Obj/_Arr/_Json/_JsonFmt, which will return a null variant for such undefined properties).
In addition to the property names, some pseudo-methods are available for such object variant instances:
writeln(V1._Count); // will write 3 i.e. the number of name/value pairs in the object document
writeln(V1._Kind); // will write 1 i.e. ord(dvObject)
for i := 0 to V2._Count-1 do
  writeln(V2.Name(i),'=',V2.Value(i));
// will write to the console:
//  name=John
//  doc={"one":1,"two":2.5}
if V1.Exists('year') then
  writeln(V1.year);
V1.Add('key','value'); // add one property to the object
The variant values returned by late-binding are generated as varByRef, which has two benefits: faster access to the data, since no deep copy is made; and the ability to modify a nested value in place:
var V: variant;
...
  V := _Json('{arr:[1,2]}');
  V.arr.Add(3); // will work, since V.arr is returned by reference (varByRef)
  writeln(V); // will write '{"arr":[1,2,3]}'
  V.arr.Delete(1);
  writeln(V); // will write '{"arr":[1,3]}'
You may also trans-type your variant instance into a TDocVariantData record, and directly access its internals.
For instance:
TDocVariantData(V1).AddValue('comment','Nice guy');
with TDocVariantData(V1) do // direct transtyping
  if Kind=dvObject then // direct access to the TDocVariantKind field
    for i := 0 to Count-1 do // direct access to the Count: integer field
      writeln(Names[i],'=',Values[i]); // direct access to the internal storage arrays
By definition, trans-typing via a TDocVariantData record is slightly faster than using late-binding. But you must ensure that the variant instance is really a TDocVariant kind of data before transtyping, e.g. by calling the _Safe(aVariant)^ function (or DocVariantType.IsOfType(aVariant) or DocVariantData(aVariant)^), which will work even for members returned as varByRef via late-binding (e.g. V2.doc):
with _Safe(V1)^ do // note ^ to de-reference into TDocVariantData
  for ndx := 0 to Count-1 do // direct access to the Count: integer field
    writeln(Names[ndx],'=',Values[ndx]); // direct access to the internal storage arrays
writeln(V2.doc); // will write '{"one":1,"two":2.5}'
if DocVariantType.IsOfType(V2.Doc) then // will be false, since V2.Doc is a varByRef variant
  writeln('never run'); // .. so TDocVariantData(V2.doc) will fail
with DocVariantData(V2.Doc)^ do // note ^ to de-reference into TDocVariantData
  for ndx := 0 to Count-1 do // direct access to the TDocVariantData methods
    writeln(Names[ndx],'=',Values[ndx]);
// will write to the console:
//  one=1
//  two=2.5
In practice, _Safe(aVariant)^ may be preferred, since DocVariantData(aVariant)^ will raise an EDocVariant exception if aVariant is not a TDocVariant, whereas _Safe(aVariant)^ will return a "fake" void DocVariant instance, in which Count=0 and Kind=dvUndefined.
The TDocVariantData type features some additional U[] I[] B[] D[] O[] O_[] A[] A_[] _[] properties, which can be used to have direct typed access to the data, as RawUTF8, Int64/integer or Double, or for checking whether the nested document is an O[]bject or an A[]rray.
You can also allocate the TDocVariantData instance directly on the stack, if you do not need any variant-oriented access to the object, but just some local storage:
var Doc1,Doc2: TDocVariantData;
...
  Doc1.Init; // needed for proper initialization
  assert(Doc1.Kind=dvUndefined);
  Doc1.AddValue('name','John'); // add some properties
  Doc1.AddValue('birthyear',1972);
  assert(Doc1.Kind=dvObject); // is now identified as an object
  assert(Doc1.Value['name']='John'); // read access to the properties (also as varByRef)
  assert(Doc1.Value['birthyear']=1972);
  assert(Doc1.U['name']='John'); // slightly faster read access
  assert(Doc1.I['birthyear']=1972);
  writeln(Doc1.ToJSON); // will write '{"name":"John","birthyear":1972}'
  Doc1.Value['name'] := 'Jonas'; // update one property
  writeln(Doc1.ToJSON); // will write '{"name":"Jonas","birthyear":1972}'
  Doc2.InitObject(['name','John','birthyear',1972],
    aOptions+[dvoReturnNullForUnknownProperty]); // initialization from name/value pairs
  assert(Doc2.Kind=dvObject);
  assert(Doc2.Count=2);
  assert(Doc2.Names[0]='name');
  assert(Doc2.Values[0]='John');
  writeln(Doc2.ToJSON); // will write '{"name":"John","birthyear":1972}'
  Doc2.Delete('name');
  writeln(Doc2.ToJSON); // will write '{"birthyear":1972}'
  assert(Doc2.U['name']='');
  assert(Doc2.I['birthyear']=1972);
  Doc2.U['name'] := 'Paul';
  Doc2.I['birthyear'] := 1982;
  writeln(Doc2.ToJSON); // will write '{"name":"Paul","birthyear":1982}'
You do not need to protect the stack-allocated TDocVariantData instances with a try..finally, since the compiler will do it for you. Take a look at all the methods and properties of TDocVariantData.
With _Arr(), an array variant instance will be initialized with data supplied as a list of Value1,Value2,..., e.g.
var V1,V2: variant; // stored as any variant
...
  V1 := _Arr(['John','Mark','Luke']);
  V2 := _Obj(['name','John','array',_Arr(['one','two',2.5])]); // as nested array
Then you can convert those objects into JSON, by two means:
- use the VariantSaveJson() function, which returns directly one UTF-8 content;
- or cast the variant instance into a string (this will be slower, but is possible).

writeln(VariantSaveJson(V1));
writeln(V1); // implicit conversion from variant into string
// both commands will write '["John","Mark","Luke"]'
writeln(VariantSaveJson(V2));
writeln(V2); // implicit conversion from variant into string
// both commands will write '{"name":"John","array":["one","two",2.5]}'
As with any object document, the Delphi IDE debugger is able to display such array variant values as their JSON representation.
Late-binding is also available, with a special set of pseudo-methods:
writeln(V1._Count); // will write 3 i.e. the number of items in the array document
writeln(V1._Kind); // will write 2 i.e. ord(dvArray)
for i := 0 to V1._Count-1 do
  writeln(V1.Value(i),':',V1._(i)); // Value() or _() pseudo-methods
// will write to the console:
//  John:John
//  Mark:Mark
//  Luke:Luke
if V1.Exists('John') then // Exists() pseudo-method
  writeln('John found in array');
V1.Add('new item'); // add "new item" to the array
V1._ := 'another new item'; // add "another new item" to the array
writeln(V1); // will write '["John","Mark","Luke","new item","another new item"]'
V1.Delete(2);
V1.Delete(1);
writeln(V1); // will write '["John","new item","another new item"]'
When using late-binding, the object properties or array items are retrieved as varByRef, so you can even run the pseudo-methods on any nested member:
V := _Json('["root",{"name":"Jim","year":1972}]'); V.Add(3.1415); assert(V='["root",{"name":"Jim","year":1972},3.1415]'); V._(1).Delete('year'); // delete a property of the nested object assert(V='["root",{"name":"Jim"},3.1415]'); V.Delete(1); // delete an item in the main array assert(V='["root",3.1415]');
Of course, trans-typing into a TDocVariantData record is possible, and will be slightly faster than using late-binding. As usual, using the _Safe(aVariant)^ function is safer, especially when working on varByRef members returned via late-binding.
As with an object document, you can also allocate the TDocVariantData instance directly on the stack, if you do not need any variant-oriented access to the array:
var Doc: TDocVariantData;
...
  Doc.Init; // needed for proper initialization - see also Doc.InitArray()
  assert(Doc.Kind=dvUndefined); // this instance has no defined sub-type
  Doc.AddItem('one'); // add some items to the array
  Doc.AddItem(2);
  assert(Doc.Kind=dvArray); // is now identified as an array
  assert(Doc.Value[0]='one'); // direct read access to the items
  assert(Doc.Values[0]='one'); // with index check
  assert(Doc.Count=2);
  writeln(Doc.ToJSON); // will write '["one",2]'
  Doc.Delete(0);
  assert(Doc.Count=1);
  writeln(Doc.ToJSON); // will write '[2]'
You could use the A[] property to retrieve an object property as a TDocVariant array, or the A_[] property to add a missing array property to an object, for instance:
Doc.Clear; // reset the previous Doc content
writeln(Doc.A['test']); // will write 'null'
Doc.A_['test']^.AddItems([1,2]);
writeln(Doc.ToJSON); // will write '{"test":[1,2]}'
writeln(Doc.A['test']); // will write '[1,2]'
Doc.A_['test']^.AddItems([3,4]);
writeln(Doc.ToJSON); // will write '{"test":[1,2,3,4]}'
With _Json() or _JsonFmt(), either an object or an array variant instance will be initialized with data supplied as JSON, e.g.
var V1,V2,V3,V4: variant; // stored as any variant
...
  V1 := _Json('{"name":"john","year":1982}'); // strict JSON syntax
  V2 := _Json('{name:"john",year:1982}'); // with MongoDB extended syntax for names
  V3 := _JsonFmt('{"name":?,"year":?}',[],['john',1982]);
  V4 := _JsonFmt('{%:?,%:?}',['name','year'],['john',1982]);
  writeln(VariantSaveJSON(V1));
  writeln(VariantSaveJSON(V2));
  writeln(VariantSaveJSON(V3));
  // all commands will write '{"name":"john","year":1982}'
Of course, you can nest objects or arrays as parameters to the _JsonFmt() function.
The supplied JSON can be either in strict JSON syntax, or with the MongoDB extended syntax, i.e. with unquoted property names. It can be pretty convenient, and less error-prone, to forget about quotes around the property names when typing JSON in your Delphi code.
Note that TDocVariant implements an open interface for adding any custom extension to JSON: for instance, if the SynMongoDB.pas unit is used in your application, you will be able to create any MongoDB-specific types in your JSON, like ObjectID(), NumberDecimal("..."), new Date() or even /regex/option.
As with any object or array document, the Delphi IDE debugger is able to display such variant values as their JSON representation.
By default, the variant instance created by _Obj() _Arr() _Json() _JsonFmt() will use a copy-by-value pattern. It means that when an instance is assigned to another variable, a new variant document will be created, and all internal values will be copied. Just like a record type.
This will imply that if you modify any item of the copied variable, it won't change the original variable:
var V1,V2: variant;
...
  V1 := _Obj(['name','John','year',1972]);
  V2 := V1; // create a new variant, and copy all values
  V2.name := 'James'; // modifies V2.name, but not V1.name
  writeln(V1.name,' and ',V2.name);
  // will write 'John and James'
As a result, your code will be perfectly safe to work with, since V1 and V2 will be uncoupled.
But one drawback is that passing such a value may be pretty slow, for instance, when you nest objects:
var V1,V2: variant;
...
  V1 := _Obj(['name','John','year',1972]);
  V2 := _Arr(['John','Mark','Luke']);
  V1.names := V2; // here the whole V2 array will be re-allocated into V1.names
Such a behavior could be pretty time and resource consuming, in case of a huge document.
All the _Obj() _Arr() _Json() _JsonFmt() functions have an optional TDocVariantOptions parameter, which allows changing the behavior of the created TDocVariant instance, especially setting dvoValueCopiedByReference.
This particular option will set the copy-by-reference pattern:
var V1,V2: variant;
...
  V1 := _Obj(['name','John','year',1972],[dvoValueCopiedByReference]);
  V2 := V1; // creates a reference to the V1 instance
  V2.name := 'James'; // modifies V2.name, but also V1.name
  writeln(V1.name,' and ',V2.name);
  // will write 'James and James'
You may think that this behavior is somewhat weird for a variant type. But if you forget about per-value objects, and consider those TDocVariant types as Delphi class instances (which are per-reference types), without the need of having a fixed schema nor handling the memory manually, it will probably start to make sense.
Note that a set of global functions have been defined, which allow direct creation of documents with per-reference instance lifetime, named _ObjFast() _ArrFast() _JsonFast() _JsonFastFmt(). Those are just wrappers around the corresponding _Obj() _Arr() _Json() _JsonFmt() functions, with the following JSON_OPTIONS[true] constant passed as options parameter:
const
  /// some convenient TDocVariant options
  // - JSON_OPTIONS[false] is _Json() and _JsonFmt() functions default
  // - JSON_OPTIONS[true] are used by _JsonFast() and _JsonFastFmt() functions
  JSON_OPTIONS: array[Boolean] of TDocVariantOptions = (
    [dvoReturnNullForUnknownProperty],
    [dvoReturnNullForUnknownProperty,dvoValueCopiedByReference]);
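You may also pass one of those constants explicitly as the options parameter, instead of calling the _*Fast() wrappers:

var V: variant;
begin
  // equivalent to _JsonFast(): by-reference, null for unknown properties
  V := _Json('{"name":"John","year":1972}', JSON_OPTIONS[true]);
end;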
When working with complex documents, e.g. with BSON / MongoDB documents, almost all content will be created in "fast" per-reference mode.
By default, TDocVariantData will only recognize integer, Int64 and currency - see Currency handling - as number values. Any floating point value which may not be translated to/from its JSON textual representation safely will be stored as a JSON string, i.e. any value which does not match an integer, or up to 4 fixed decimals with 64-bit precision.
You can set the dvoAllowDoubleValue option to TDocVariantData, so that such numbers will be recognized and stored as double. In this case, plain varDouble storage will be used for the variant values, i.e. 64-bit IEEE 754 storage, handling the 5.0 x 10^-324 .. 1.7 x 10^308 range. With such floating-point values, you may lose precision and digits during the JSON serialization process. This is why it is not enabled by default.
Also note that some JSON engines do not support 64-bit integer numbers. For instance, JavaScript engines only store up to 53 bits of information without precision loss, due to their internal storage as 8-byte IEEE 754 containers. In some cases, it is safest to use a JSON string representation of such numbers, as is done with the woIDAsIDstr value of TTextWriterWriteObjectOption, used for safe serialization of TSQLRecord.ID ORM values.
If you want to work with high-precision floating point numbers, consider using TDecimal128 values, as implemented in SynMongoDB.pas, which support 128-bit high precision decimal, as defined by the IEEE 754-2008 128-bit decimal floating point standard, and handled in MongoDB 3.4+.
As stated above, a TDocVariantOptions parameter enables defining the behavior of a TDocVariant custom type for a given instance. Please refer to the documentation of this set of options to find out the available settings. Some are related to the memory model, others to the case-sensitivity of the property names, others to the behavior expected in case of a non-existing property, and so on...
Note that this setting is local to the given variant instance.
In fact, TDocVariant does not force you to stick to one memory model nor a set of global options; you can use the best pattern depending on your exact process. You can even mix the options - i.e. include some objects as properties in an object created with other options - but in this case, the initial options of the nested object will remain. So you should use this feature with caution.
You can use the _Unique() global function to force a variant instance to have a unique set of options, with all nested documents becoming by-value, or _UniqueFast() for all nested documents to become by-reference.
// assuming V1='{"name":"James","year":1972}' created by-reference
_Unique(V1); // change options of V1 to be by-value
V2 := V1; // creates a full copy of the V1 instance
V2.name := 'John'; // modifies V2.name, but not V1.name
writeln(V1.name); // write 'James'
writeln(V2.name); // write 'John'
V1 := _Arr(['root',V2]); // created as by-value by default, as V2 was
writeln(V1._Count); // write 2
_UniqueFast(V1); // change options of V1 to be by-reference
V2 := V1;
V1._(1).name := 'Jim';
writeln(V1);
writeln(V2);
// both commands will write '["root",{"name":"Jim","year":1972}]'
The easiest is to stick to one set of options in your code, i.e.:
- use the default _*() global functions if your business code sends some TDocVariant instances to any other part of your logic, for further storage: in this case, the by-value pattern does make sense;
- use the _*Fast() global functions if the TDocVariant instances are local to a small part of your code, e.g. used as dynamic schema-less Data Transfer Objects (DTO).
In all cases, be aware that, like any class type, the const, var and out specifiers of method parameters do not apply to the TDocVariant value, but to its reference.
In fact, whenever a dynamic schema-less storage structure is needed, you may use a TDocVariant instance instead of class or record strong-typed types:
- the ORM will store a TDocVariant in any of the TSQLRecord variant published properties (and store them as JSON in a text column);
- the SOA will handle a TDocVariant as variant parameters of any method, which makes them perfect DTOs;
- since a JavaScript client is able to consume or build such a TDocVariant value from the ground up, it makes a perfect fit for working with AJAX clients, in a script-like approach;
- if you use our SynMongoDB.pas / mORMotMongoDB.pas units to access a MongoDB server, TDocVariant will be the native storage to create or access nested BSON arrays or object documents - that is, it will allow proper ODM storage;
- cross-cutting features (like logging or record / dynamic array enhancements) will also benefit from this TDocVariant custom type.
We are pretty convinced that once you start playing with TDocVariant, you won't be able to live without it any more. It introduces the full power of late-binding and dynamic schema-less patterns to your application code, which can be pretty useful for prototyping or in Agile development. You do not need to use scripting engines like Python or JavaScript: Delphi is perfectly able to handle dynamic coding!
For date/time storage as text, the framework will use ISO 8601 encoding. Dates can be encoded as YYYY-MM-DD or YYYYMMDD, times as hh:mm:ss or hhmmss, and combined date and time representations as <date>T<time>, i.e. YYYY-MM-DDThh:mm:ss or YYYYMMDDThhmmss.
The lexicographical order of the representation thus corresponds to chronological order, except for date representations involving negative years. This allows dates to be naturally sorted by, for example, file systems, or grid lists.
In addition to the default TDateTime type, which will be serialized with a one second resolution, you may use TDateTimeMS, which includes the milliseconds, i.e. YYYY-MM-DDThh:mm:ss.sss or YYYYMMDDThhmmss.sss:
type TDateTimeMS = type TDateTime;
This TDateTimeMS type is handled both during record - see below - and dynamic array - see below - JSON serialization, and by the framework ORM.
The SynCommons.pas unit also defines a TTimeLog type, and some functions able to convert to/from regular TDateTime values:
type TTimeLog = type Int64;
This integer storage is encoded as a series of bits, which map the TTimeLogBits record type, as defined in the SynCommons.pas unit.
The resolution of such values is one second. In fact, it uses internally for computation an abstract "year" of 16 months of 32 days of 32 hours of 64 minutes of 64 seconds.
As a consequence, any date/time information can be retrieved from its internal bit-level representation.
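Given the abstract "year" layout just described (64 seconds, 64 minutes, 32 hours, 32 days, 16 months), the decoding may be sketched with plain bit operations - this is only an illustration: in practice, you should use the ready-to-use TTimeLogBits methods, which also handle how days and months are actually stored (they may be zero-based internally):

var t: TTimeLog;
    ss, mm, hh, day, month, year: integer;
begin
  t := TimeLogNow;
  ss    := integer(t and 63);          // seconds, bits 0..5
  mm    := integer((t shr 6) and 63);  // minutes, bits 6..11
  hh    := integer((t shr 12) and 31); // hours,   bits 12..16
  day   := integer((t shr 17) and 31); // day,     bits 17..21
  month := integer((t shr 22) and 15); // month,   bits 22..25
  year  := integer(t shr 26);          // year,    bits 26 and above
end;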
The ISO 8601 standard allows millisecond resolution, encoded as hh:mm:ss.sss or hhmmss.sss. Our TTimeLog/TTimeLogBits integer encoding uses a one second time resolution, and a 64-bit integer storage, so it is not able to handle such precision. You could use TDateTimeMS values instead, if milliseconds are required.
Note that since the TTimeLog type is bit-oriented, you can't just add or subtract two TTimeLog values when doing date/time computation: use a temporary TDateTime conversion in such a case. See for instance how the TSQLRest.ServerTimestamp property is computed:
function TSQLRest.GetServerTimestamp: TTimeLog;
begin
  PTimeLogBits(@result)^.From(Now+fServerTimestampOffset);
end;
procedure TSQLRest.SetServerTimestamp(const Value: TTimeLog);
begin
  fServerTimestampOffset := PTimeLogBits(@Value)^.ToDateTime-Now;
end;
But if you simply want to compare TTimeLog kinds of date/time, it is safe to directly compare their underlying Int64 values, since timestamps will be stored in increasing order, with a resolution of one second.
Due to a compiler limitation in older versions of Delphi, a direct typecast of a TTimeLog or Int64 variable into a TTimeLogBits record (as in TTimeLogBits(aTimeLog).ToDateTime) could lead to an internal compiler error. In order to circumvent this bug, you will have to use a pointer typecast, e.g. as in PTimeLogBits(@Value)^.ToDateTime above.
But in most cases, you should rather use the following functions to manage such timestamps:
function TimeLogNow: TTimeLog;
function TimeLogNowUTC: TTimeLog;
function TimeLogFromDateTime(DateTime: TDateTime): TTimeLog;
function TimeLogToDateTime(const Timestamp: TTimeLog): TDateTime; overload;
function Iso8601ToTimeLog(const S: RawByteString): TTimeLog;
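For instance, computing and comparing two timestamps with those functions may look like this:

var start, stop: TTimeLog;
begin
  start := TimeLogNowUTC;
  // ... some process ...
  stop := TimeLogNowUTC;
  if stop > start then // direct Int64 comparison is safe, at one second resolution
    writeln('at least one second elapsed');
end;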
See below for additional information about this TTimeLog storage, and how it is handled by the framework ORM, via the additional TModTime and TCreateTime types.
As an alternative, you may use the TUnixTime type, which is a 64-bit encoded number of seconds since the Unix Epoch, i.e. 1970-01-01 00:00:00 UTC:
type TUnixTime = type Int64;
You can convert such values:
- to/from TTimeLog values, using the TTimeLogBits.ToUnixTime and TTimeLogBits.FromUnixTime methods;
- to/from TDateTime values, using the UnixTimeToDateTime/DateTimeToUnixTime functions;
- or use UnixTimeUTC to return the current timestamp, calling a very fast OS API.
You may consider using TUnixTime especially if the timestamp is likely to be handled by third-party clients following this encoding. In the Delphi world, the TDateTime or TTimeLog types could be preferred.
One common problem when handling dates and times, is that common time is shown and entered as local, whereas the computer should better use non-geographic information - especially on a Client-Server architecture, where both ends may not be on the same physical region.
A time zone is a region that observes a uniform standard time for legal, commercial, and social purposes. Time zones tend to follow the boundaries of countries and their subdivisions because it is convenient for areas in close commercial or other communication to keep the same time. Most of the time zones on land are offset from Coordinated Universal Time (UTC) by a whole number of hours, or minutes. Even worse, some countries use daylight saving time for part of the year, typically by changing clocks by an hour, twice every year.
The main rule is that any date and time should be stored in UTC, or with an explicit Zone identifier (i.e. an explicit offset to the UTC value). Our framework expects this behavior: every date/time value stored and handled by the ORM, SOA, or any other part of it, is expected to be UTC-encoded. At the presentation layer (e.g. the User Interface), conversion to/from local times should take place, so that the end-user is provided with friendly wall-clock compatible timing.
As you may guess, handling time zones is a complex task, which should be managed by the Operating System itself. Since this cultural material is constantly evolving, it is updated as part of the OS.
In practice, the current local time can be converted from UTC using the current system-wide time zone. One of the only parameters you have to set when installing an Operating System is to pick the keyboard layout... and the current time zone to be used. But in a client-server environment, you may have to manage several time zones on the server side: so you can't rely on this global setting.
One sad - but predictable - disappointment is that there is no common way of encoding time zone information. Under Windows, the registry contains a list of time zones, and the associated time bias data. Most POSIX systems (including Linux and Mac OSX) rely on the IANA database, also called tzdata - you may have noticed that this particular package is often updated with your system. The two sets of zone identifiers do not map each other, so our framework needed something that could be shared across all systems.
The SynCommons.pas unit features the TSynTimeZone class, which is able to retrieve the information from the Windows registry into memory via TSynTimeZone.LoadFromRegistry, and save it into a compressed file via TSynTimeZone.SaveToFile. Later on, this file can be reloaded on any system, including any Linux flavor, via TSynTimeZone.LoadFromFile, and will return the very same results. The compressed file is pretty small, thanks to its optimized layout and the use of our SynLZ compression algorithm: the full information is stored in a 7 KB file - the same flattened information as JSON is around 130 KB, and you may compare it with the official http://www.iana.org content, which weighs 280 KB as a tar.gz... Of course, tzdata potentially stores a lot more information than we need.
In practice, you may use TSynTimeZone.Default, which will return an instance read from the current version of the registry under Windows, and will attempt to load the information named after the executable file name (appended with a .tz extension) on other Operating Systems.
You may therefore write:
aLocalTime := TSynTimeZone.Default.NowToLocal(aTimeZoneID);
Similarly, you may use the TSynTimeZone.UtcToLocal or TSynTimeZone.LocalToUtc methods, with the proper TZ identifier.
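For instance, assuming 'Romance Standard Time' is a valid Windows time zone identifier available on your system:

var utc, paris: TDateTime;
begin
  utc := NowUTC;
  paris := TSynTimeZone.Default.UtcToLocal(utc, 'Romance Standard Time');
  // and back from the local wall-clock time to UTC
  utc := TSynTimeZone.Default.LocalToUtc(paris, 'Romance Standard Time');
end;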
You will have to create the needed .tz compressed file under a Windows machine, then provide this file together with any Linux server executable, in its very same folder. On a Cloud-like system, you may store this information on a centralized server, e.g. via a dedicated service - see below - generated from a single reference Windows system via TSynTimeZone.SaveToBuffer, and later on use TSynTimeZone.LoadFromBuffer to decode it on all your cloud nodes. The main benefit is that the time information will stay consistent whatever system it runs on, as you may expect.
Your User Interface could retrieve the IDs and ready-to-be-displayed text from the TSynTimeZone.Ids and TSynTimeZone.Displays properties, as plain TStrings instances, whose indexes will follow the internal TSynTimeZone.Zone[] information.
As a nice side effect, the TSynTimeZone binary internal storage has been found to be very efficient, and much faster than a manual reading of the Windows registry. Complex local time calculations can be done on the server side, with no fear of degrading your processing performance.
Once your application is multi-threaded, concurrent data access should be protected. Otherwise, a "race condition" issue may appear: for instance, if two threads modify a variable at the same time (e.g. decrease a counter), values may become incoherent and unsafe to use. The most known symptom is the "deadlock", by which the whole application appears to be blocked and unresponsive. On a server system, which is expected to run 24/7 with no maintenance, such an issue is to be avoided.
In Delphi, protection of a resource (which may be an object, or any variable) is usually done via Critical Sections. A critical section is an object used to make sure that some part of the code is executed only by one thread at a time. A critical section needs to be created/initialized before it can be used, and to be released when it is not needed anymore. Then, some code is protected using the Enter/Leave methods, which will lock its execution: in practice, only a single thread will own the critical section, so only a single thread will be able to execute this code section, and other threads will wait until the lock is released. For best performance, the protected sections should be as small as possible - otherwise the benefit of using threads may be voided, since any other thread will wait for the thread owning the critical section to release the lock.
In practice, you may use a TCriticalSection class, or the lower-level TRTLCriticalSection record, which is perhaps to be preferred, since it will use less memory, and could easily be included as a (protected) field in any class definition.
Let's say we want to protect any access to the variables a and b. Here's how to do it with the critical sections approach:
var CS: TRTLCriticalSection;
    a, b: integer;
// set before the threads start
InitializeCriticalSection(CS);
// in each TThread.Execute:
EnterCriticalSection(CS);
try // protect the lock via a try ... finally block
  // from now on, you can safely make changes to the variables
  inc(a);
  inc(b);
finally
  // end of safe block
  LeaveCriticalSection(CS);
end;
// when the threads stop
DeleteCriticalSection(CS);
In newer versions of Delphi, you may use a TMonitor class, which will let the lock be owned by any Delphi TObject. Before XE5, there were some performance issues, and even now, this Java-inspired feature may not be the best approach, since it is tied to a single object, and is not compatible with older versions of Delphi (or FPC).
Eric Grange reported some years ago - see https://www.delphitools.info/2011/11/30/fixing-tcriticalsection - that TRTLCriticalSection (along with TMonitor) suffers from a severe design flaw, in which entering/leaving different critical sections can end up serializing your threads, and the whole can even end up performing worse than if your threads had been serialized. This is because it's a small, dynamically allocated object, so several TRTLCriticalSection instances can end up in the same CPU cache line, and when that happens, you'll have cache conflicts aplenty between the cores running the threads.
The fix proposed by Eric is dead simple:
type
  TFixedCriticalSection = class(TCriticalSection)
  private
    FDummy: array [0..95] of Byte;
  end;
Since we wanted to use a TRTLCriticalSection record instead of a TCriticalSection class instance, we defined a TSynLocker record in SynCommons.pas:
TSynLocker = record
private
  fSection: TRTLCriticalSection;
public
  Padding: array[0..6] of TVarData;
  procedure Init;
  procedure Done;
  procedure Lock;
  procedure UnLock;
end;
As you can see, the Padding[] array will ensure that the CPU cache-line issue won't affect our object.
TSynLocker use is close to TRTLCriticalSection, with some method-oriented behavior:
var safe: TSynLocker;
    a, b: integer;
// set before the threads start
safe.Init;
// in each TThread.Execute:
safe.Lock;
try // protect the lock via a try ... finally block
  // from now on, you can safely make changes to the variables
  inc(a);
  inc(b);
finally
  // end of safe block
  safe.Unlock;
end;
// when the threads stop
safe.Done;
If your purpose is to protect a method execution, you may use the TSynLocker.ProtectMethod function or explicit Lock/Unlock, as such:
type
  TMyClass = class
  protected
    fSafe: TSynLocker;
    fField: integer;
  public
    constructor Create;
    destructor Destroy; override;
    procedure UseLockUnlock;
    procedure UseProtectMethod;
  end;
{ TMyClass }
constructor TMyClass.Create;
begin
  fSafe.Init; // we need to initialize the lock
end;
destructor TMyClass.Destroy;
begin
  fSafe.Done; // finalize the lock
  inherited;
end;
procedure TMyClass.UseLockUnlock;
begin
  fSafe.Lock;
  try
    // now we can safely access any protected field from multiple threads
    inc(fField);
  finally
    fSafe.UnLock;
  end;
end;
procedure TMyClass.UseProtectMethod;
begin
  fSafe.ProtectMethod; // calls fSafe.Lock and returns a local IUnknown instance
  // now we can safely access any protected field from multiple threads
  inc(fField);
  // here fSafe.UnLock will be called when the IUnknown is released
end;
For your own class definitions, you may inherit from some classes providing a TSynLocker instance, as defined in SynCommons.pas:
TSynPersistentLocked = class(TSynPersistent)
...
  property Safe: TSynLocker read fSafe;
end;

TInterfacedObjectLocked = class(TInterfacedObjectWithCustomCreate)
...
  property Safe: TSynLocker read fSafe;
end;

TObjectListLocked = class(TObjectList)
...
  property Safe: TSynLocker read fSafe;
end;

TRawUTF8ListHashedLocked = class(TRawUTF8ListHashed)
...
  property Safe: TSynLocker read fSafe;
end;
All those classes will initialize and finalize their owned Safe instance, in their constructor/destructor.
So, we may have written our class as such:
type
  TMyClass = class(TSynPersistentLocked)
  protected
    fField: integer;
  public
    procedure UseLockUnlock;
    procedure UseProtectMethod;
  end;
{ TMyClass }
procedure TMyClass.UseLockUnlock;
begin
  fSafe.Lock;
  try
    // now we can safely access any protected field from multiple threads
    inc(fField);
  finally
    fSafe.UnLock;
  end;
end;

procedure TMyClass.UseProtectMethod;
begin
  fSafe.ProtectMethod; // calls fSafe.Lock and returns a local IUnknown instance
  // now we can safely access any protected field from multiple threads
  inc(fField);
  // here fSafe.UnLock will be called, when the IUnknown is released
end;
As you can see, the Safe: TSynLocker instance will be defined and handled at the TSynPersistentLocked parent level.
Inheriting from a TSynPersistentLocked class (or one of its siblings) only gives you access to a single TSynLocker per instance. If your class inherits from TSynAutoCreateFields, you may create one or several TAutoLocker published properties, which will be auto-created with the instance:
type
  TMyClass = class(TSynAutoCreateFields)
  protected
    fLock: TAutoLocker;
    fField: integer;
  public
    function FieldValue: integer;
  published
    property Lock: TAutoLocker read fLock;
  end;
{ TMyClass }
function TMyClass.FieldValue: integer;
begin
  fLock.ProtectMethod;
  result := fField;
  inc(fField);
end;
var c: TMyClass;
begin
  c := TMyClass.Create;
  Assert(c.FieldValue=0);
  Assert(c.FieldValue=1);
  c.Free;
end.
In practice, TSynAutoCreateFields is a very powerful way of defining Value objects, i.e. objects containing nested objects or even arrays of objects. You may use its ability to create the needed TAutoLocker instances in an automated way. But be aware that if you serialize such an instance into JSON, its nested TAutoLocker properties will be serialized as void properties - which may not be the expected result.
If your class inherits from TInjectableObject, you may define the following:
type
  TMyClass = class(TInjectableObject)
  private
    fLock: IAutoLocker;
    fField: integer;
  public
    function FieldValue: integer;
  published
    property Lock: IAutoLocker read fLock write fLock;
  end;
{ TMyClass }
function TMyClass.FieldValue: integer;
begin
  Lock.ProtectMethod;
  result := fField;
  inc(fField);
end;
var c: TMyClass;
begin
  c := TMyClass.CreateInjected([],[],[]);
  Assert(c.FieldValue=0);
  Assert(c.FieldValue=1);
  c.Free;
end;
Here we use dependency resolution - see below - to let the TMyClass.CreateInjected constructor scan its published properties, and therefore search for a provider of IAutoLocker. Since IAutoLocker is globally registered to be resolved with TAutoLocker, our class will initialize its fLock field with a new instance. Now we can use Lock.ProtectMethod to enter the associated TAutoLocker's TSynLocker critical section, as usual.
Of course, this may sound more complicated than manual TSynLocker handling, but if you are writing an interface-based service - see below - your class may already inherit from TInjectableObject for its own dependency resolution, so this trick may be very convenient.
When we fixed the potential CPU cache-line issue, do you remember that we added a padding binary buffer to the TSynLocker definition? Since we do not want to waste resources, TSynLocker gives easy access to its internal data, and allows you to handle those values directly. Since it is stored as 7 slots of variant values, you can store any kind of data, including a complex TDocVariant document or array.
Our class may use this feature, and store its integer field value in the internal slot 0:
type
  TMyClass = class(TSynPersistentLocked)
  public
    procedure UseInternalIncrement;
    function FieldValue: integer;
  end;
{ TMyClass }
function TMyClass.FieldValue: integer;
begin
  // value read will also be protected by the mutex
  result := fSafe.LockedInt64[0];
end;
procedure TMyClass.UseInternalIncrement;
begin
  // this dedicated method will ensure an atomic increase
  fSafe.LockedInt64Increment(0,1);
end;
Please note that we used the TSynLocker.LockedInt64Increment() method, since the following would not be safe:
procedure TMyClass.UseInternalIncrement;
begin
  fSafe.LockedInt64[0] := fSafe.LockedInt64[0]+1;
end;
In the above line, two locks are acquired (one per LockedInt64 property call), so another thread may modify the value in-between, and the increment may not be as accurate as expected.
TSynLocker offers some dedicated properties and methods to handle this safe storage. They expect an Index value in the 0..6 range:
property Locked[Index: integer]: Variant read GetVariant write SetVariant;
property LockedInt64[Index: integer]: Int64 read GetInt64 write SetInt64;
property LockedPointer[Index: integer]: Pointer read GetPointer write SetPointer;
property LockedUTF8[Index: integer]: RawUTF8 read GetUTF8 write SetUTF8;
function LockedInt64Increment(Index: integer; const Increment: Int64): Int64;
function LockedExchange(Index: integer; const Value: variant): variant;
function LockedPointerExchange(Index: integer; Value: pointer): pointer;
You may store a pointer or a reference to a TObject instance, if necessary.
Having such a tool-set of thread-safe methods makes sense in the context of our framework, which offers multi-threaded server abilities - see below.
Remember that the TSynDictionary class is thread-safe. In fact, the TSynDictionary methods are protected by a TSynLocker instance, and the internal Count or TimeOuts values are actually stored within its 7 locked storage slots.
You may consider defining TSynDictionary instances in your business logic, or in the public API layer of your services, with proper thread safety - see below.
The ORM core of the framework is implemented in the mORMot.pas unit. It then uses other units (like mORMotSQLite3.pas, mORMotDB.pas, SynSQLite3.pas or SynDB.pas) to access the various database back-ends.
Generic access to the data is implemented by defining high-level objects as Delphi classes, descendants of a main TSQLRecord class.
In our Client-Server ORM, those TSQLRecord classes can be used for several main purposes:
- to store and retrieve your data: the needed SQL statements (SELECT / INSERT / UPDATE / DELETE) are all created on the fly by the Object-Relational Mapping (ORM) core of mORMot - see below - and a NoSQL engine like MongoDB can even be accessed the same way - see below;
- to generate the User Interface: fill a grid with the appropriate type representation (e.g. a boolean as a checkbox), create menus and reports directly from the field definitions, or have edition windows generated in an automated way - see below.

Our ORM engine has genuine advanced features like convention over configuration, integrated security, local or remote access, REST JSON publishing (for AJAX or mobile clients), direct access to the database (by-passing the slow DB.pas unit), in-memory content caching, optional audit-trail (change tracking), and integration with other parts of the framework (like SOA, logging, authentication...).
All the framework ORM process relies on the TSQLRecord class. This abstract TSQLRecord class features a lot of built-in methods, convenient to do most of the ORM process in a generic way, at record level.
It first defines a primary key field, declared as ID: TID, i.e. as Int64, in mORMot.pas:
type
  TID = type Int64;
  ...
  TSQLRecord = class(TObject)
  ...
    property ID: TID read GetID write fID;
  ...
In fact, our ORM relies on an Int64 primary key, matching the SQLite3 ID/RowID primary key.
You may be disappointed by this limitation, which is needed by the SQLite3 implementation of Virtual Tables - see below. We won't debate about a composite primary key (i.e. several fields), which is not a good idea for an ORM. In your previous RDBMS data modeling, you may have been used to defining a TEXT primary key, or even a GUID primary key: those kinds of keys are less efficient than an INTEGER, especially for the ORM internals, since they are not monotonic. You can always define a secondary key, as a string or TGUID field, if needed - using the stored AS_UNIQUE attribute as explained below.
All published properties of the TSQLRecord descendant classes are then accessed via RTTI in a Client-Server RESTful architecture.
For example, a database Baby table is defined in Delphi code as:
/// some enumeration
// - will be written as 'Female' or 'Male' in our UI Grid
// - will be stored as its ordinal value, i.e. 0 for sFemale, 1 for sMale
// - as you can see, ladies come first, here
TSex = (sFemale, sMale);
/// table used for the Babies queries
TSQLBaby = class(TSQLRecord)
private
  fName: RawUTF8;
  fAddress: RawUTF8;
  fBirthDate: TDateTime;
  fSex: TSex;
published
  property Name: RawUTF8 read fName write fName;
  property Address: RawUTF8 read fAddress write fAddress;
  property BirthDate: TDateTime read fBirthDate write fBirthDate;
  property Sex: TSex read fSex write fSex;
end;
By adding this TSQLBaby class to a TSQLModel instance, common to both Client and Server, the corresponding Baby table is created by the framework in the database engine (SQLite3 natively, or any external database). All the SQL work ('CREATE TABLE ...') is done by the framework: just code in Pascal, and all is done for you. Even the needed indexes will be created by the ORM, and you won't miss any ' or ; in your SQL queries any more.
The following published property types are handled by the ORM, and will be converted as specified to database content (in SQLite3, INTEGER is an Int64, FLOAT is a double, and TEXT is UTF-8 encoded text):
Delphi | SQLite3 | Remarks |
byte | INTEGER | |
word | INTEGER | |
integer | INTEGER | |
cardinal | INTEGER | |
Int64 | INTEGER | |
boolean | INTEGER | 0 is false, anything else is true |
enumeration | INTEGER | stores the ordinal value of the enumerated item (i.e. starting at 0 for the first element) |
set | INTEGER | each bit corresponding to an enumerated item (therefore a set of up to 64 elements can be stored in such a field) |
single | FLOAT | |
double | FLOAT | |
extended | FLOAT | stored as double (precision lost) |
currency | FLOAT | safely converted to/from currency type with fixed decimals, without rounding error |
RawUTF8 | TEXT | this is the preferred field type for storing some textual content in the ORM |
WinAnsiString | TEXT | WinAnsi char-set (code page 1252) in Delphi |
RawUnicode | TEXT | UCS2 char-set in Delphi, as AnsiString |
WideString | TEXT | UCS2 char-set, as COM BSTR type (Unicode in all versions of Delphi) |
SynUnicode | TEXT | Will be either WideString before Delphi 2009, or UnicodeString later |
string | TEXT | Not to be used before Delphi 2009 (unless you accept to lose some data during conversion) - RawUTF8 is preferred in all cases |
TDateTime | TEXT | ISO 8601 encoded date time, with second resolution |
TDateTimeMS | TEXT | ISO 8601 encoded date time, with millisecond resolution |
TTimeLog | INTEGER | as proprietary fast Int64 date time |
TModTime | INTEGER | the server date time will be stored when a record is modified (as proprietary fast Int64 ) |
TCreateTime | INTEGER | the server date time will be stored when a record is created (as proprietary fast Int64 ) |
TUnixTime | INTEGER | timestamp stored as second-based Unix Time (i.e. the 64-bit number of seconds since 1970-01-01 00:00:00 UTC) |
TSQLRecord | INTEGER | 32-bit RowID pointing to another record (warning: the field value contains pointer(RowID) , not a valid object instance - the record content must be retrieved with late-binding via its ID using a PtrInt(Field) typecast or the Field.ID method), or by using e.g. CreateJoined() - 64-bit under Win64 |
TID | INTEGER | 64-bit RowID pointing to another record, but without any information about the corresponding table |
TSQLRecordMany | nothing | data is stored in a separate pivot table; this is a particular case of TSQLRecord: it won't contain pointer(RowID), but a true instance |
TRecordReference | INTEGER | able to join any row on any table of the model, by storing both ID and TSQLRecord class type in a RecordRef -like Int64 value, with automatic reset to 0 (for TRecordReference ) or row deletion (for TRecordReferenceToBeDeleted ) when the pointed record is deleted |
TSessionUserID | INTEGER | 64-bit RowID of the TSQLAuthUser currently logged with the active session |
TPersistent | TEXT | JSON object (ObjectToJSON ) |
TCollection | TEXT | JSON array of objects (ObjectToJSON ) |
TObjectList | TEXT | JSON array of objects (ObjectToJSON ) - see TJSONSerializer. RegisterClassForJSON below |
TStrings | TEXT | JSON array of string (ObjectToJSON ) |
TRawUTF8List | TEXT | JSON array of string (ObjectToJSON ) |
any TObject | TEXT | See TJSONSerializer. RegisterCustomSerializer below |
TSQLRawBlob | BLOB | This type is an alias to RawByteString |
dynamic arrays | BLOB | in the TDynArray.SaveTo binary format |
variant | TEXT | numerical or text in JSON, or TDocVariant custom variant type for JSON objects or arrays |
TNullableInteger | INTEGER | Nullable Int64 value - see below |
TNullableBoolean | INTEGER | Nullable boolean (0/1/NULL) value - see below |
TNullableFloat | FLOAT | Nullable double value - see below |
TNullableCurrency | FLOAT | Nullable currency value - see below |
TNullableDateTime | TEXT | Nullable ISO 8601 encoded date time - see below |
TNullableTimeLog | INTEGER | Nullable TTimeLog value - see below |
TNullableUTF8Text | TEXT | Nullable Unicode text value - see below |
record | TEXT | JSON string or object, directly handled since Delphi XE5, or as defined in code by overriding TSQLRecord. InternalRegisterCustomProperties for prior versions |
TRecordVersion | INTEGER | 64-bit revision number, which will be monotonically updated each time the object is modified, to allow remote synchronization - see below |
Some additional attributes may be added to the published field definitions:
- if the property is marked as stored AS_UNIQUE (i.e. stored false), it will be created as UNIQUE in the database (i.e. a SQL index will be created and uniqueness of the value will be checked at insert/update);
- for a dynamic array field, the index number can be used by the TSQLRecord.DynArray(DynArrayFieldIndex) method to create a TDynArray wrapper mapping the dynamic array data;
- for a RawUTF8 / string / WideString / WinAnsiString field of an "external" class - i.e. a TEXT field stored in a remote SynDB.pas-based database - see below - the index number will be used to define the maximum character size of this field, when creating the corresponding column in the database (SQLite3 or PostgreSQL does not have any such size expectations).

For instance, the following class definition will create an index for its SerialNumber property (up to 30 characters long if stored in an external database), and will expect a link to a model of diaper (TSQLDiaperModel) and the baby which used it (TSQLBaby). An ID / RowID column will always be available (from TSQLRecord), so in this case, you will be able to make a fast lookup for a particular diaper from either its internal mORMot ID, or its official unique serial number:
/// table used for the Diaper queries
TSQLDiaper = class(TSQLRecord)
private
  fSerialNumber: RawUTF8;
  fModel: TSQLDiaperModel;
  fBaby: TSQLBaby;
published
  property SerialNumber: RawUTF8
    index 30 read fSerialNumber write fSerialNumber stored AS_UNIQUE;
  property Model: TSQLDiaperModel read fModel write fModel;
  property Baby: TSQLBaby read fBaby write fBaby;
end;
Note that a TNullableUTF8Text kind of property will follow the same index ### attribute interpretation.
In practice, the generic string type is handled (as UnicodeString under Delphi 2009 and later), but you may lose some content if you are working with a pre-Unicode version of Delphi (in which string = AnsiString with the current system code page). So we do not recommend its use.
The natural Delphi type to be used for TEXT storage in our framework is RawUTF8, as introduced for Unicode and UTF-8. All business process should better use RawUTF8 variables and methods (you have all the necessary functions in SynCommons.pas); you should then explicitly convert the RawUTF8 content into a string, using U2S / S2U from mORMoti18n.pas or StringToUTF8 / UTF8ToString, which will handle proper char-set conversion according to the current i18n settings. On Unicode versions of Delphi (starting with Delphi 2009), you can directly assign a string / UnicodeString value to / from a RawUTF8, but this implicit conversion will be slightly slower than our StringToUTF8 / UTF8ToString functions. With pre-Unicode versions of Delphi (up to Delphi 2007), such a direct assignment will probably lose data for all non-ASCII-7-bit characters, so an explicit call to the StringToUTF8 / UTF8ToString functions is required.
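For instance, a short sketch using the StringToUTF8 / UTF8ToString functions mentioned above:

var u: RawUTF8;
    s: string;
begin
  u := 'Hello';           // business logic works with RawUTF8
  s := UTF8ToString(u);   // explicit conversion for the VCL / UI layer
  // ... show s in the User Interface ...
  u := StringToUTF8(s);   // back to UTF-8 for storage or transmission
end;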
You will find in the SynCommons.pas unit all low-level RawUTF8 processing functions and classes, to be used instead of any SysUtils.pas functions. The mORMot core implementation of RawUTF8 is heavily optimized for speed and multi-threading, so it is recommended not to use string in your code, unless you access the VCL / User Interface layer.
Having such a dedicated RawUTF8 type will also ensure that you are not leaking your domain from its business layer to the presentation layer, as defined with Multi-tier architecture.
Delphi TDateTime and TDateTimeMS properties will be stored as ISO 8601 text in the database, with second and millisecond resolution respectively. See Iso8601 time and date for details about this text encoding.
As alternatives, TTimeLog / TModTime / TCreateTime offer a proprietary fast Int64 date time format, which maps the TTimeLogBits record type, as defined in the SynCommons.pas unit.
This format is very fast for comparing dates or converting into/from text, and is stored as INTEGER in the database, therefore more efficiently than the plain ISO 8601 text used for TDateTime fields.
In practice, TModTime and TCreateTime values are interchangeable with TTimeLog. They are just handled with special care by the ORM, so that their associated field values will be updated with the current UTC timestamp, for every TSQLRecord modification (for TModTime), or at entry creation (for TCreateTime). The time value stored is in fact the UTC timestamp, as returned from the current REST Server: when any REST client performs a connection, it will retrieve the time offset from the REST Server, which will be used to store a consistent time value across all Clients.
You may also define a TUnixTime property, which will store the number of seconds since 1970-01-01 00:00:00 UTC as INTEGER in the database, and serialize it as a 64-bit JSON number. This encoding has the benefit of being handled by the SQLite3 date/time functions, and of being interoperable with most third-party languages.
If you define a TSessionUserID published property, this field will be automatically filled at creation or modification of the TSQLRecord with the current TSQLAuthUser.ID value of the active session. If no session has been initialized from the client side, 0 will be stored.
By design, and as with TModTime fields, you should use the ORM PUT/POST CRUD methods to compute this field value: manual SQL statements (like UPDATE Table SET Column=0) won't set its content. Also, it is up to the client to fill the TSessionUserID fields before sending their content to the server - the Delphi and cross-platform ORM clients will perform this assignment.
Enumerations should be mapped as INTEGER, i.e. via ord(aEnumValue) or TEnum(aIntegerValue).
Enumeration sets should be mapped as INTEGER, via a byte/word/integer typecast, according to the number of elements in the set: for instance, byte(aSetValue) for up to 8 elements, word(aSetValue) for up to 16 elements, and integer(aSetValue) for up to 32 elements.
For standard floating-point values, the framework natively handles the double and currency kinds of variables.
In fact, double is the native type handled by most database providers - it is also native to the SSE set of opcodes of newer CPUs (as handled by Delphi XE2 in 64-bit mode). The lack of extended should not be problematic (if it is mandatory, a dedicated set of mathematical classes should be preferred to a database), and could be implemented with the expected precision via a TEXT field (or a BLOB mapped by a dynamic array).
The currency type is the standard Delphi type to be used when storing and handling monetary values, native to the x87 FPU - when it comes to money, a dedicated type is worth the cost in a "rich man's world". It avoids any rounding problems, with exact 4 decimals precision, and is able to safely store numbers in the range -922337203685477.5808 .. 922337203685477.5807 - which should be enough for your pocket change.
As stated by the official Delphi documentation:

Currency is a fixed-point data type that minimizes rounding errors in monetary calculations. On the Win32 platform, it is stored as a scaled 64-bit integer with the four least significant digits implicitly representing decimal places. When mixed with other real types in assignments and expressions, Currency values are automatically divided or multiplied by 10000.
In fact, this type matches the corresponding OLE and .Net implementations of currency, and it is still implemented the same on the Win64 platform (since XE2). The Int64 binary representation of the currency type (i.e. value*10000, as accessible via a typecast like PInt64(@aCurrencyValue)^) is a safe and fast implementation pattern.
In our framework, we tried to avoid any unnecessary conversion to float values when dealing with currency values. Some dedicated functions have been implemented - see Currency handling - for fast and secure access to currency published properties via RTTI, especially when converting values to or from JSON text. Using the Int64 binary representation can be not only faster, but also safer: you will avoid any rounding problem which may be introduced by the conversion to a float type. For all database processes, especially with external engines, the SynDB.pas units will try to avoid any conversion to/from double for the dedicated ftCurrency columns.
Rounding issues are a nightmare to track in production - so it is safer to have a framework that natively handles a currency type from the ground up.
It is worth saying that TSQLRecord published properties are not, by default, class instances, as they would be in regular Delphi code. After running the TSQLRecord.Create() or CreateAndFillPrepare() constructors, you should never call aMyRecord.AnotherRecord.Property directly, or you will raise an Access Violation.
In fact, the TSQLRecord published properties definition is used to define "one to many" or "one to one" relationships between tables. As a consequence, the nested AnotherRecord property won't be a true class instance, but an ID trans-typed as TSQLRecord.
The only exception to this rule is the TSQLRecordMany kind of published properties, which, by design, are true instances, needed to access the pivot table data of a "many to many" relationship. The ORM will auto-instantiate all TSQLRecordMany published properties, then release them at Destroy - so you do not need to maintain their life time.
Note that you may use e.g. the TSQLRecord.CreateJoined() constructor to auto-instantiate and load all TSQLRecord published properties at once, then release them at Destroy - see below.
The ORM will automatically perform the following optimizations for TSQLRecord published fields: an index will be created on the database for the corresponding column, and, when the pointed record is deleted, all fields referring to it will be reset to 0. In fact, the ORM won't define an ON DELETE SET DEFAULT foreign key via SQL: this feature won't be implemented at RDBMS level, but emulated at ORM level.
See below for more details about how to work with TSQLRecord published properties.
TSQLRecord published properties do match a class instance pointer, so are 32-bit (at least for Win32/Linux32 executables). Since the TSQLRecord.ID field is declared as TID = Int64, we may lose information if the stored ID is greater than 2,147,483,647 (i.e. a signed 32-bit value).
You can define a published property as TID to store any value of our primary key, i.e. up to 9,223,372,036,854,775,807. Note that in this case, there is no information about the joined table.
As a consequence, the ORM will create an index on the database for the corresponding column of a TID field, but, since no table is associated, no deletion tracking will take place, in contrast to a TSQLRecord published property.

You can optionally specify the associated table, using a custom TID type for the published property definition. In this case, you will sub-class TID, following the <TableName>ID naming convention.
For instance, if you define:
type
  TSQLRecordClientID = type TID;
  TSQLRecordClientToBeDeletedID = type TID;
TSQLOrder = class(TSQLRecord)
...
  property Client: TID
    read fClient write fClient;
  property OrderedBy: TSQLRecordClientID
    read fOrderedBy write fOrderedBy;
  property OrderedByCascade: TSQLRecordClientToBeDeletedID
    read fOrderedByCascade write fOrderedByCascade;
...
Those three published fields will be able to store an Int64 foreign key, and the ORM will ensure a corresponding index is created on the database, to speed up searches on their values. But their type - TID, TSQLRecordClientID, or TSQLRecordClientToBeDeletedID - will define how deletion will be processed.
By using the generic TID type, the first Client property won't have any reference to any table, so no deletion tracking will take place.
On the other hand, following the type naming convention, the other OrderedBy and OrderedByCascade properties will be associated with the TSQLRecordClient table of the data model.
In fact, the ORM will retrieve the 'TSQLRecordClientID' or 'TSQLRecordClientToBeDeletedID' type names, and search for an associated TSQLRecord by trimming *[ToBeDeleted]ID, which is TSQLRecordClient in this case.
As a result, the ORM will be able to track any TSQLRecordClient deletion: for any row pointing to the deleted record, it will ensure that the OrderedBy property is reset to 0, or that the row containing the OrderedByCascade property is deleted. Note that the framework won't define an ON DELETE SET DEFAULT or ON DELETE CASCADE foreign key via SQL, but will emulate them at ORM level.
TSQLRecord or TID published properties are associated with a single TSQLRecord joined table. You can use TRecordReference or TRecordReferenceToBeDeleted published properties to store a reference to any record on any table of the data model.
In fact, such properties will store in an Int64 value a reference to both a TSQLRecord class (therefore defining a table), and one ID (to define the row).
You can later on use e.g. TSQLRest.Retrieve(Reference) to get a record content in one step.
One important note is to remember that the table reference is stored as an index to the TSQLRecord class in the associated TSQLModel.
As a consequence, for such TRecordReference* properties to work as expected, you should ensure that:
- the order of the TSQLRecord classes in the TSQLModel does not change after any model modification: otherwise, all previously stored TRecordReference* values may point to a wrong record;
- the TSQLModel contains all TSQLRecord classes which are used with TRecordReference*.

Depending on the type, the ORM will track the deletion of the pointed record:
- TRecordReference fields will be reset to 0 - emulating an ON DELETE SET DEFAULT foreign key SQL declaration;
- TRecordReferenceToBeDeleted fields will delete the whole record - emulating an ON DELETE CASCADE foreign key SQL declaration.

Just like with TSQLRecord or TSQLRecordClassName[ToBeDeleted]ID fields, this deletion tracking is not defined at RDBMS level, but emulated at ORM level.
In order to work easily with TRecordReference values (which are in fact plain Int64 values), you can transtype them into the RecordRef() record, and access the stored information via a set of helper methods. See below for an example of use of such a TRecordReference in a data model, e.g. the AssociatedRecord property of TSQLAuditTrail.
To sum up all the possible foreign key references available in the framework, check out this table:
Type Definition | Index | Tables | Deletion Tracking | Emulated SQL |
TSQLRecord | Yes | One | Field reset to 0 | ON DELETE SET DEFAULT |
TID | Yes | No | None | None |
TClassNameID | Yes | One | Field reset to 0 | ON DELETE SET DEFAULT |
TClassNameToBeDeletedID | Yes | One | Row deleted | ON DELETE CASCADE |
TRecordReference | Yes | All | Field reset to 0 | ON DELETE SET DEFAULT |
TRecordReferenceToBeDeleted | Yes | All | Row deleted | ON DELETE CASCADE |
It is worth saying that this deletion tracking is not defined at RDBMS level, but at ORM level.
As a consequence, it will work with any kind of database, including NoSQL and Object-Document Mapping (ODM). In fact, RDBMS engines do not allow defining such an ON DELETE trigger on several tables, whereas mORMot handles such composite references as expected for TRecordReference. Since this is not database-level tracking, but tracking performed by the mORMot server, if you still use the database directly from legacy code, ensure that you take care of this tracking yourself, perhaps by using a SOA service instead of direct SQL statements.
The ORM will store variant fields as TEXT in the database, serialized as JSON.
At loading, it will check their content:
- if some custom variant types are registered (e.g. MongoDB custom objects), they will be recognized as such (with extended syntax, if applying);
- numerical values (integer or double) will be recognized if the stored text has the corresponding layout;
- otherwise, the content will be handled as a string value.

Since all data is stored as TEXT in the column, your queries shall ensure that any SQL WHERE statement handles it as expected (e.g. with a conversion to number before comparison). Even though SQLite3 is able to assign a column type for each row (i.e. store a variant as in Delphi code), we did not use this feature, since we wanted our framework to work with all databases - and SQLite3 is quite alone in having this feature.
At JSON level, variant fields will be transmitted as JSON text or numbers, depending on the stored value.
If you use a MongoDB external NoSQL database - see below - such variant fields will not be stored as JSON text, but as true BSON documents. So you will be able to apply all the advanced search and indexing abilities of this database engine, if needed.
Since Delphi XE5, you can define and work directly with published record properties of TSQLRecord:
TSQLMyRecord = class(TSQLRecordPeople)
protected
  fGUID: TGUID;
published
  property GUID: TGUID index 38 read fGUID write fGUID;
end;
The record will be serialized as JSON - here TGUID will be serialized as a JSON string - then stored as a TEXT column in the database. We specified an index 38 attribute to state that this column will contain up to 38 characters, when stored on an external database - see below.
Published properties of records are handled by our code, but Delphi doesn't create the corresponding RTTI for such properties before Delphi XE5. So record published properties, as defined in the above class definition, won't work directly for older versions of Delphi, or FreePascal. You could use a dynamic array with only one element to handle records within your TSQLRecord class definition - see below - but it may be confusing.
If you want to work with such properties before Delphi XE5, you can override the TSQLRecord.InternalRegisterCustomProperties() virtual method of a given table, to explicitly define a record property. For instance, to register a GUID property mapping a TSQLMyRecord.fGUID: TGUID field:
type
  TSQLMyRecord = class(TSQLRecord)
  protected
    fGUID: TGUID;
    class procedure InternalRegisterCustomProperties(
      Props: TSQLRecordProperties); override;
  public
    property GUID: TGUID read fGUID write fGUID;
  end;
{ TSQLMyRecord }
class procedure TSQLMyRecord.InternalRegisterCustomProperties(
  Props: TSQLRecordProperties);
begin
  Props.RegisterCustomPropertyFromTypeName(self,'TGUID','GUID',
    @TSQLMyRecord(nil).fGUID,[aIsUnique],38);
end;
You may call Props.RegisterCustomPropertyFromRTTI() instead, supplying the TypeInfo() pointer, for a record containing reference-counted fields like string, variant or nested dynamic arrays. Of course, any custom JSON serialization of the given record type - see below - will be supported.
Those custom record registration methods will define either:
- a TEXT column, for RegisterCustomPropertyFromRTTI() or RegisterCustomPropertyFromTypeName();
- a BLOB column, for RegisterCustomRTTIRecordProperty() or RegisterCustomFixedSizeRecordProperty().

In fact, several kinds of properties will be stored as BLOB in the database backend:
- TSQLRawBlob properties, which are how you store your binary data, e.g. images or documents;
- dynamic arrays (in the TDynArray.SaveTo binary format);
- record fields which were explicitly registered as BLOB columns.

By default, both dynamic arrays and BLOB record content will be retrieved from the database, encoded as Base64 text.
But TSQLRawBlob properties will be transmitted as separate RESTful resources, as required by the REST scheme. For instance, it means that a first request will retrieve all "simple" fields as JSON, then some other requests are needed to retrieve each BLOB field as a binary buffer. As a result, TSQLRawBlob fields won't be transmitted by default, to spare transmission bandwidth and resources.
You can change this default behavior, by setting either:
- the TSQLRestClientURI.ForceBlobTransfert: boolean property, to force the transfer of all BLOBs of all the tables of the data model - this is what is done e.g. for the SynFile main demo - see later in this document;
- the TSQLRestClientURI.ForceBlobTransfertTable[] property, for a specified table of the model.

In Delphi, nullable types do not exist, as they do for instance in C#, via its int? kind of definition.
But at SQL and JSON levels, the NULL values do exist and are expected to be available from our ORM.
In SQLite3 itself, NULL is handled as stated in http://www.sqlite.org/lang_expr.html (see e.g. the IS and IS NOT operators).
It is worth noting that NULL handling is not consistent among all existing database engines, e.g. when you are comparing NULL with non NULL values... so we recommend using it with care in any database statements, or only with proper (unit) testing, when you switch from one database engine to another.
By default, in the mORMot ORM/SQL code, NULL will appear only in case of a BLOB storage with a size of 0 bytes. Otherwise, you should not see it as a value in the most used types - see TSQLRecord fields definition.
Null-oriented value types have been implemented in our framework, since the Object Pascal language does not (yet) allow defining a nullable type. We chose to store those values as variant, with a set of dedicated TNullable* types, as defined in mORMot.pas:
type
  TNullableInteger = type variant;
  TNullableBoolean = type variant;
  TNullableFloat = type variant;
  TNullableCurrency = type variant;
  TNullableDateTime = type variant;
  TNullableTimeLog = type variant;
  TNullableUTF8Text = type variant;
In order to define a NULLable column of such types, you can use them as types for your TSQLRecord class definition:
type
  TSQLNullableRecord = class(TSQLRecord)
  protected
    fInt: TNullableInteger;
    fBool: TNullableBoolean;
    fFlt: TNullableFloat;
    fCurr: TNullableCurrency;
    fDate: TNullableDateTime;
    fTimestamp: TNullableTimeLog;
    fCLOB: TNullableUTF8Text;
    fText: TNullableUTF8Text;
  published
    property Int: TNullableInteger read fInt write fInt;
    property Bool: TNullableBoolean read fBool write fBool;
    property Flt: TNullableFloat read fFlt write fFlt;
    property Curr: TNullableCurrency read fCurr write fCurr;
    property Date: TNullableDateTime read fDate write fDate;
    property Timestamp: TNullableTimeLog read fTimestamp write fTimestamp;
    property CLOB: TNullableUTF8Text read fCLOB write fCLOB;
    property Text: TNullableUTF8Text index 32 read fText write fText;
  end;
Such a class will let the ORM handle SQL NULL values as expected, i.e. returning a null variant value, or an integer/number/text value if there is something stored. Of course, the corresponding column in the database will have the expected data type, e.g. a NULLABLE INTEGER for a TNullableInteger property.
Note that TNullableUTF8Text is defined as a usual RawUTF8 field - see Property Attributes - that is, without any size limitation by default (as for the CLOB property), or with an explicit size limitation using the index ### attribute (as for the Text property, which will be converted into a VARCHAR(32) SQL column).
You can use the following wrapper functions to create a TNullable* value from any non-nullable standard Delphi value:
function NullableInteger(const Value: Int64): TNullableInteger;
function NullableBoolean(Value: boolean): TNullableBoolean;
function NullableFloat(const Value: double): TNullableFloat;
function NullableCurrency(const Value: currency): TNullableCurrency;
function NullableDateTime(const Value: TDateTime): TNullableDateTime;
function NullableTimeLog(const Value: TTimeLog): TNullableTimeLog;
function NullableUTF8Text(const Value: RawUTF8): TNullableUTF8Text;
Some corresponding constants do match the expected null value for each kind, with strong typing (to be used for FPC compatibility, which does not allow direct assignment of a plain null: variant to a TNullable* = type variant property):
var
  NullableIntegerNull: TNullableInteger absolute NullVarData;
  NullableBooleanNull: TNullableBoolean absolute NullVarData;
  ...
You can check whether a TNullable* value contains null, using the following functions:
function NullableIntegerIsEmptyOrNull(const V: TNullableInteger): Boolean;
function NullableBooleanIsEmptyOrNull(const V: TNullableBoolean): Boolean;
...
Or retrieve a Delphi non-nullable value in one step, using the corresponding wrappers:
function NullableIntegerToValue(const V: TNullableInteger; out Value: Int64): Boolean;
function NullableBooleanToValue(const V: TNullableBoolean; out Value: Boolean): Boolean;
...
function NullableIntegerToValue(const V: TNullableInteger): Int64;
function NullableBooleanToValue(const V: TNullableBoolean): Boolean;
...
Those Nullable*ToValue() functions are mandatory for use under FPC, which does not allow mixing plain variant values and specialized TNullable* = type variant values. Thanks to those types, and their corresponding wrapper functions, you have at hand everything needed to safely store nullable values in your application database, with proper handling on the Delphi side.
To access a particular record, the following code can be used to handle CRUD statements (Create Retrieve Update Delete actions are implemented via the Add/Update/Delete/Retrieve methods), following the RESTful pattern - see below - and using the ID primary key as resource identifier:
procedure Test(Client: TSQLRest); // we will use CRUD operations on a REST instance
var Baby: TSQLBaby; // store a record
    ID: TID;        // store a reference to a record
begin
  // create and save a new record, since Smith, Jr was just born
  Baby := TSQLBaby.Create;
  try
    Baby.Name := 'Smith';
    Baby.Address := 'New York City';
    Baby.BirthDate := Date;
    Baby.Sex := sMale;
    ID := Client.Add(Baby,true);
  finally
    Baby.Free; // manage memory as usual
  end;
  // update record data
  Baby := TSQLBaby.Create(Client,ID); // retrieve from ID
  try
    assert(Baby.Name='Smith');
    Baby.Name := 'Smeeth';
    Client.Update(Baby);
  finally
    Baby.Free;
  end;
  // retrieve record data
  Baby := TSQLBaby.Create;
  try
    Client.Retrieve(ID,Baby);
    // we may have written: Baby := TSQLBaby.Create(Client,ID);
    assert(Baby.Name='Smeeth');
  finally
    Baby.Free;
  end;
  // delete the created record
  Client.Delete(TSQLBaby,ID);
end;
Of course, you can keep a TSQLBaby instance alive for a longer time. The same TSQLBaby instance can be used to access several record contents, calling the Retrieve / Add / Delete / Update methods on purpose.
No SQL statement to write, nothing to care about database engine expectations (e.g. for date or number processing): just access objects via high-level methods. It could even work with NoSQL databases, like a fast TObjectList or MongoDB. This is the magic of ORM.
To be honest, the REST pattern does not match the CRUD operations exactly. We had to bend the REST verbs - as defined below - a little to fit our ORM purpose. But all you have to know is that those Add/Update/Delete/Retrieve methods are able to define the full persistence lifetime of your precious objects.
You can query your table with the FillPrepare or CreateAndFillPrepare methods, for instance to find all male babies with a name starting with the letter 'A':
var aMale: TSQLBaby;
...
aMale := TSQLBaby.CreateAndFillPrepare(Client,
  'Name LIKE ? AND Sex = ?',['A%',ord(sMale)]);
try
  while aMale.FillOne do
    DoSomethingWith(aMale);
finally
  aMale.Free;
end;
This request loops through all matching records, accessing each row content via a TSQLBaby instance.
The mORMot engine will create a SQL statement with the appropriate SELECT query, retrieve all data as JSON, transmit it between the Client and the Server (if any), then convert the values into properties of our TSQLBaby object instance. Internally, the [CreateAnd]FillPrepare / FillOne methods use a list of records, retrieved as JSON from the Server, and parsed in memory one row at a time (using an internal TSQLTableJSON instance).
Note that there is an optional aCustomFieldsCSV parameter available in all the FillPrepare / CreateAndFillPrepare methods, by which you may specify a CSV list of field names to be retrieved. It may save some remote bandwidth, if not all record field values are needed in the loop. Note that you should use this aCustomFieldsCSV parameter only to retrieve some data, and that the other fields will remain untouched (i.e. void in case of CreateAndFillPrepare): any later call to Update won't lead to a data loss, since the method will know that it has been called during a FillPrepare / CreateAndFillPrepare process, and only the retrieved fields will be updated on the server side.
You could also create a TObjectList - or, even better for newer versions of Delphi supporting the generics syntax, a TObjectList<T> - instance to retrieve all values of a table:
var aList: TObjectList<TSQLBaby>;
    aMale: TSQLBaby;
...
aList := Client.RetrieveList<TSQLBaby>(
  'Name LIKE ? AND Sex = ?',['A%',ord(sMale)]);
try
  for aMale in aList do
    DoSomethingWith(aMale);
finally
  aList.Free;
end;
Note that this method will use more memory and resources than a *FillPrepare call followed by a while ... FillOne do loop, since the latter will only allocate one instance of the TSQLRecord, then fill the properties of this single instance directly from the returned JSON content, one row at a time. For huge lists, or in a multi-threaded environment, it may make a difference. But the generics syntax can make for cleaner code, better integrated with your business logic.
For safer and faster database processing, the WHERE clause of the request expects some parameters to be specified. They are bound in their order of appearance as ? in the WHERE clause of the [CreateAnd]FillPrepare query method.
Standard simple kinds of parameters (RawUTF8, integer, double, currency...) can be bound directly - as in the sample code above for the Name or Sex properties. The first parameter will be bound as the 'A%' RawUTF8 TEXT, and the second as the 1 INTEGER value.
Any TDateTime bound parameter shall better be specified using the DateToSQL(), DateTimeToSQL() or TimeLogToSQL() functions, as such:
aRec.CreateAndFillPrepare(Client,'Datum=?',[DateToSQL(EncodeDate(2012,5,4))]);
aRec.CreateAndFillPrepare(Client,'Datum>=?',[DateToSQL(2012,5,4)]);
aRec.CreateAndFillPrepare(Client,'Datum<=?',[DateTimeToSQL(Now)]);
aRec.CreateAndFillPrepare(Client,'Datum<=?',[TimeLogToSQL(Client.ServerTimestamp)]);
For TTimeLog / TModTime / TCreateTime / TUnixTime kinds of properties, please use the underlying Int64 value as bound parameter.
As stated previously, BLOB (i.e. sftBlob or TSQLRawBlob) properties are handled separately, via dedicated RetrieveBlob and UpdateBlob method calls (or their global RetrieveBlobFields / UpdateBlobFields twins). In fact, BLOB data is expected to be potentially big (more than a few MB). But you can specify a small BLOB content using an explicit conversion to the corresponding TEXT format, by calling the BinToBase64WithMagic() overloaded functions when preparing an UPDATE query, or by defining a TByteDynArray published field instead of TSQLRawBlob.
See also the ForceBlobTransfert and ForceBlobTransfertTable[] properties of TSQLRestClientURI.
Note that there was a breaking change about the TSQLRecord.Create / FillPrepare / CreateAndFillPrepare and TSQLRest.OneFieldValue / MultiFieldValues methods: for historical reasons, they expected parameters to be marked as % in the SQL WHERE clause, and inlined via :(...): as stated below - since revision 1.17 of the framework, those methods expect parameters marked as ? and with no :(...):. Due to this breaking change, a user code review is necessary if you want to upgrade the engine from 1.16 or previous. In all cases, using ? is less confusing for new users, and closer to the usual way of preparing database queries - e.g. as stated below. Both TSQLRestClient.ExecuteFmt / ListFmt methods are not affected by this change, since they are just wrappers around the FormatUTF8() function.
For the most complex code, you may want to prepare the WHERE clause of the ORM request ahead of time. You may use the overloaded FormatUTF8() function as such:
var where: RawUTF8;
begin
  where := FormatUTF8('id=?', [], [SomeID]);
  if add_active then
    where := FormatUTF8('% and active=?', [where], [ActiveFlag]);
  if add_date_ini then
    where := FormatUTF8('% and date_ini>=?', [where], [DateToSQL(Date-2)]);
  ...
Then the request will be easy to create, and fast to execute, thanks to prepared statements in the framework database layer.
As we stated above, the [CreateAnd]FillPrepare / FillOne methods are implemented via an internal TSQLTableJSON instance.
In short, TSQLTableJSON will expect some JSON content as input, parse it into rows and columns, associate it with one or more optional TSQLRecord class types, then let you access the data via its Get* methods.
You can use this TSQLTableJSON class as in the following example:
procedure WriteBabiesStartingWith(const Letters: RawUTF8; Sex: TSex);
var aList: TSQLTableJSON;
    Row: integer;
begin
  aList := Client.MultiFieldValues(TSQLBaby,'ID,BirthDate',
    'Name LIKE ? AND Sex = ?',[Letters+'%',ord(Sex)]);
  if aList=nil then
    raise Exception.Create('Impossible to retrieve data from Server');
  try
    for Row := 1 to aList.RowCount do
      writeln('ID=',aList.GetAsInteger(Row,0),' BirthDate=',aList.Get(Row,1));
  finally
    aList.Free;
  end;
end;
For a record with a huge number of fields, specifying only the needed fields can save some bandwidth. In the above sample code, the ID column has a field index of 0 (so is retrieved via aList.GetAsInteger(Row,0)) and the BirthDate column has a field index of 1 (so is retrieved as a PUTF8Char via aList.Get(Row,1)). All data rows are processed via a loop using the RowCount property - the first data row is indexed as 1, since row 0 contains the column names.
The TSQLTable class has some methods dedicated to direct cursor handling, as such:
procedure WriteBabiesStartingWith(const Letters: RawUTF8; Sex: TSex);
var aList: TSQLTableJSON;
begin
  aList := Client.MultiFieldValues(TSQLBaby,'ID,BirthDate',
    'Name LIKE ? AND Sex = ?',[Letters+'%',ord(Sex)]);
  try
    while aList.Step do
      writeln('ID=',aList.Field(0),' BirthDate=',aList.Field(1));
  finally
    aList.Free;
  end;
end;
By using the TSQLTable.Step method, you do not need to check that aList<>nil, since it will return false if aList is not assigned. And you do not need to access the RowCount property, nor specify the current row number.
We could also have used the field name, instead of the field index, within the loop:
writeln('ID=',aList.Field('ID'),' BirthDate=',aList.Field('BirthDate'));
You can also access the field values using late-binding and a local variant, which gives some perfectly readable code:
procedure WriteBabiesStartingWith(const Letters: RawUTF8; Sex: TSex);
var baby: variant;
begin
  with Client.MultiFieldValues(TSQLBaby,'ID,BirthDate',
    'Name LIKE ? AND Sex = ?',[Letters+'%',ord(Sex)]) do
  try
    while Step(false,@baby) do
      writeln('ID=',baby.ID,' BirthDate=',baby.BirthDate);
  finally
    Free;
  end;
end;
In the above code, late-binding will search for the "ID" and "BirthDate" fields at runtime. But the ability to write baby.ID and baby.BirthDate is very readable. Using a with ... do statement makes the code shorter, but should be avoided if it leads to confusion, e.g. in case of more complex processing within the loop.
See also the following methods of TSQLRest: OneFieldValue, OneFieldValues, MultiFieldValue and MultiFieldValues, which are able to retrieve either a TSQLTableJSON, or a dynamic array of integer or RawUTF8. And also the List and ListFmt methods of TSQLRestClient, if you want to make a JOIN against multiple tables at once.
A TSQLTableJSON content can be associated with a TGrid, in order to produce a User Interface taking advantage of the column types, as retrieved from the associated TSQLRecord RTTI. The TSQLTableToGrid class is able to associate any TSQLTable with a standard TDrawGrid, with some enhancements: themed drawing, Unicode handling, column types (e.g. boolean values are displayed as check-boxes, dates as text, etc.), column auto-size, column sort, incremental key lookup, optional hidden IDs, selection...
(This paragraph is not mandatory reading, so you can skip it if you do not need to know about the mORMot internals - just remember that ? bound parameters are inlined as :(...): in the transmitted JSON content, and so can be set directly as such in any WHERE clause.)
If you consider the first sample code:
aMale := TSQLBaby.CreateAndFillPrepare(Client, 'Name LIKE ? AND Sex = ?',['A%',ord(sMale)]);
This will execute a SQL statement, with an ORM-generated SELECT, and a WHERE clause using two parameters bound at execution, containing the 'A%' RawUTF8 text and the 1 integer value.
In fact, from the SQL point of view, the CreateAndFillPrepare() method as called here is exactly the same as:
aMale := TSQLBaby.CreateAndFillPrepare(Client, 'Name LIKE :(''A%''): AND Sex = :(1):');
or
aMale := TSQLBaby.CreateAndFillPrepare(Client,
  'Name LIKE :(%): AND Sex = :(%):',['''A%''',ord(sMale)],[]);
or
aMale := TSQLBaby.CreateAndFillPrepare(Client, FormatUTF8('Name LIKE :(%): AND Sex = :(%):',['''A%''',ord(sMale)]));
The first point is that the 'A%' text has been embraced with quotes, as expected by the SQL syntax. In fact, Name LIKE :(%): AND Sex = :(%):, with ['''A%''',ord(sMale)], is expected to produce a valid WHERE clause of a SQL statement.
Note that we used single quotes, but we could have used double quotes (") inside the :( ): statements. In fact, SQLite3 expects single quotes in its raw SQL statements, whereas our prepared statements :( ): will handle both single ' and double " quotes. Just to avoid any confusion, we will always show single quotes in the documentation. But you can safely use double quotes within :( ): statements, which can be more convenient than single quotes, which must be doubled within a Pascal constant string ''.
The only not-so-obvious syntax in the above code is the :(%): used for defining prepared parameters in the format string.
In fact, the format string will produce the following WHERE clause, with the parameters inlined as plain text:
aMale := TSQLBaby.CreateAndFillPrepare(Client, 'Name LIKE :(''A%''): AND Sex = :(1):');
So that the following SQL query will be executed by the database engine, after translation by the ORM magic:
SELECT * FROM Baby WHERE Name LIKE ? AND Sex = ?;
The first ? parameter will be bound with the 'A%' value, and the second with 1.
In fact, when the framework finds some :( ): in the SQL statement string, it will prepare a SQL statement, and will bind the parameters before execution (in our case, the text 'A%' and the integer 1), reusing any previous matching prepared SQL statement. See below for more details about this mechanism.
To be clear, without any prepared statement, you could have used:
aMale := TSQLBaby.CreateAndFillPrepare(Client, 'Name LIKE % AND Sex = %',['''A%''',ord(sMale)],[]);
or
aMale := TSQLBaby.CreateAndFillPrepare(Client, FormatUTF8('Name LIKE % AND Sex = %',['''A%''',ord(sMale)]));
which will produce the same as:
aMale := TSQLBaby.CreateAndFillPrepare(Client, 'Name LIKE ''A%'' AND Sex = 1');
So that the following SQL statement will be executed:
SELECT * FROM Baby WHERE Name LIKE 'A%' AND Sex = 1;
Note that we prepared the SQL WHERE clause, so that we could use the same request statement for all females with a name starting with the character 'D':
aFemale := TSQLBaby.CreateAndFillPrepare(Client, 'Name LIKE :(%): AND Sex = :(%):', ['''D%''',ord(sFemale)]);
Using a prepared statement will speed up the database engine, because the SQL query will have to be parsed and optimized only once.
The second query method, i.e.
aList := Client.MultiFieldValues(TSQLBaby,'ID,BirthDate', 'Name LIKE ? AND Sex = ?',[Letters+'%',ord(Sex)]);
is the same as this code:
aList := Client.MultiFieldValues(TSQLBaby,'ID,BirthDate', 'Name LIKE :(%): AND Sex = :(%):',[QuotedStr(Letters+'%'),ord(Sex)],[]);
or
aList := Client.MultiFieldValues(TSQLBaby,'ID,BirthDate', FormatUTF8('Name LIKE :(%): AND Sex = :(%):',[QuotedStr(Letters+'%'),ord(Sex)]));
In both cases, the parameters will be inlined, in order to prepare the statements, and improve execution speed.
We used the QuotedStr standard function to embrace the Letters parameter with quotes, as expected by the SQL syntax.
Of course, using '?' and bound parameters is much easier than '%' and manual :(%): in-lining with a QuotedStr() function call. In your client code, you should better use '?' - but if you find some ':(%):' in the framework source code, or where a WHERE clause is expected within the transmitted JSON content, you won't be surprised.
Working with objects is pretty powerful, but requires you to manually handle the life time of the created instances, via try .. finally blocks. Most of the time, the TSQLRecord life time will be very short: we allocate one instance on a local variable, then release it when it goes out of scope.
If we take the TSQLBaby sample again, we may write:
function NewMaleBaby(Client: TSQLRest; const Name,Address: RawUTF8): TID;
var Baby: TSQLBaby; // store a record
begin
  Baby := TSQLBaby.Create;
  try
    Baby.Name := Name;
    Baby.Address := Address;
    Baby.BirthDate := Date;
    Baby.Sex := sMale;
    result := Client.Add(Baby,true);
  finally
    Baby.Free;
  end;
end;
To ease this pretty usual pattern, the framework offers some kind of automatic memory management at TSQLRecord level:
function NewMaleBaby(Client: TSQLRest; const Name,Address: RawUTF8): TID;
var Baby: TSQLBaby; // store a record
begin
  TSQLBaby.AutoFree(Baby); // no try..finally needed!
  Baby.Name := Name;
  Baby.Address := Address;
  Baby.BirthDate := Date;
  Baby.Sex := sMale;
  result := Client.Add(Baby,true);
end; // local Baby instance will be released here
It may also be useful for queries.
Instead of writing:
var aMale: TSQLBaby;
...
aMale := TSQLBaby.CreateAndFillPrepare(Client,
  'Name LIKE ? AND Sex = ?',['A%',ord(sMale)]);
try
  while aMale.FillOne do
    DoSomethingWith(aMale);
finally
  aMale.Free;
end;
We may write:
var aMale: TSQLBaby;
...
TSQLBaby.AutoFree(aMale,Client,'Name LIKE ? AND Sex = ?',['A%',ord(sMale)]);
while aMale.FillOne do
  DoSomethingWith(aMale);
Without the need to write the try ... finally block.
See the TSQLRecord.AutoFree() overloaded methods in mORMot.pas for the several use cases, and the associated TAutoFree / IAutoFree types as defined in SynCommons.pas. Note that you can handle several local variables in a single TSQLRecord.AutoFree() or TAutoFree.Create() initialization.
Be aware that it does not introduce some kind of magic garbage collector, as available in C# or Java. It is not even similar to the ARC memory model used by Apple and the Delphi NextGen compiler. It is just some syntactic sugar creating a local hidden IAutoFree interface, which will be released at the end of the local method by the compiler, releasing all associated class instances. So the local class instances should stay in the local scope, and should not be sent and stored in another process: in such cases, you may encounter access violation issues.
Due to an issue (feature?) in the FPC implementation of interfaces - see http://bugs.freepascal.org/view.php?id=26602 - the above code will not work directly. You should assign the result of this method to a local IAutoFree variable, as such:
var aMale: TSQLBaby;
auto: IAutoFree;
...
auto := TSQLBaby.AutoFree(aMale,Client,'Name LIKE ? AND Sex = ?',['A%',ord(sMale)]);
while aMale.FillOne do
DoSomethingWith(aMale);
One alternative may be to use a with statement, which avoids the need to define a local variable:
var aMale: TSQLBaby;
...
with TAutoFree.One(aMale,TSQLBaby.CreateAndFillPrepare(Client,
  'Name LIKE ? AND Sex = ?',['A%',ord(sMale)])) do
  while aMale.FillOne do
    DoSomethingWith(aMale);
Or use one of the TSQLRecord.AutoFree overloaded class methods:
var aMale: TSQLBaby;
...
  with TSQLBaby.AutoFree(aMale,Client,'Name LIKE ? AND Sex = ?',['A%',ord(sMale)]) do
    while aMale.FillOne do
      DoSomethingWith(aMale);
If you want your code to cross-compile with both Delphi and FPC, keep this expectation of the FPC compiler in mind.
All the previous code is fine if your application requires "flat" data. But most of the time, you'll need to define master/child relationships, perhaps over several levels. In data modeling, the cardinality of one data table with respect to another data table is a critical aspect of database design. Relationships between data tables define cardinality when explaining how each table links to another.
In the relational model, tables can have the following cardinality, i.e. can be related as any of:
- "One to one";
- "One to many" (and its reverse, "Many to one");
- "Many to many" (also known as a cross reference).
Our mORMot framework handles all those kinds of cardinality.
In order to handle "One to one" or "One to many" relationships between tables (i.e. a normalized Master/Detail in a classical RDBMS approach), you could define TSQLRecord published properties in the object definition.
For instance, you could declare classes as such:
TSQLMyFileInfo = class(TSQLRecord)
private
  FMyFileDate: TDateTime;
  FMyFileSize: Int64;
published
  property MyFileDate: TDateTime read FMyFileDate write FMyFileDate;
  property MyFileSize: Int64 read FMyFileSize write FMyFileSize;
end;
TSQLMyFile = class(TSQLRecord)
private
  FSecondOne: TSQLMyFileInfo;
  FFirstOne: TSQLMyFileInfo;
  FMyFileName: RawUTF8;
published
  property MyFileName: RawUTF8 read FMyFileName write FMyFileName;
  property FirstOne: TSQLMyFileInfo read FFirstOne write FFirstOne;
  property SecondOne: TSQLMyFileInfo read FSecondOne write FSecondOne;
end;
As stated by the TSQLRecord fields definition, TSQLRecord published properties do not contain an instance of the TSQLRecord class. They will instead contain pointer(RowID), and will be stored as an INTEGER in the database.
So the main rule is to never use such published properties directly, as if they were regular class instances: otherwise you'll get an unexpected access violation error.
When creating such records, use temporary instances for each detail object, as such:
var One, Two: TSQLMyFileInfo;
    MyFile: TSQLMyFile;
begin
  One := TSQLMyFileInfo.Create;
  Two := TSQLMyFileInfo.Create;
  MyFile := TSQLMyFile.Create;
  try
    One.MyFileDate := ....
    One.MyFileSize := ...
    MyFile.FirstOne := TSQLMyFileInfo(MyDataBase.Add(One,True)); // add One and store ID in MyFile.FirstOne
    Two.MyFileDate := ....
    Two.MyFileSize := ...
    MyFile.SecondOne := TSQLMyFileInfo(MyDataBase.Add(Two,True)); // add Two and store ID in MyFile.SecondOne
    MyDataBase.Add(MyFile,true);
  finally
    MyFile.Free;
    Two.Free;
    One.Free;
  end;
end;
Note that those two assignments are equivalent:
MyFile.FirstOne := TSQLMyFileInfo(MyDataBase.Add(One,True));
MyFile.FirstOne := pointer(MyDataBase.Add(One,True));
Or you may have added the One row first:
MyDatabase.Add(One,true);
then assigned it to the MyFile record using one of the following expressions:
MyFile.FirstOne := TSQLMyFileInfo(One.ID);
MyFile.FirstOne := pointer(One.ID);
MyFile.FirstOne := One.AsTSQLRecord;
The first two statements, using a class/pointer type cast, will work only on 32-bit platforms (since the ID is an integer there). Using the TSQLRecord.AsTSQLRecord property will work on all platforms, including 64-bit, and is perhaps easier to deal with in your code.
When accessing the detail objects, you should not access the FirstOne or SecondOne properties directly (they are not class instances, but integer IDs); use instead the TSQLRecord.Create(aClient: TSQLRest; aPublishedRecord: TSQLRecord; ForUpdate: boolean=false) overloaded constructor, as such:
var One: TSQLMyFileInfo;
    MyFile: TSQLMyFile;
begin
  MyFile := TSQLMyFile.Create(Client,aMyFileID);
  try
    // here MyFile.FirstOne.MyFileDate would trigger an access violation
    One := TSQLMyFileInfo.Create(Client,MyFile.FirstOne);
    try
      // here you can access One.MyFileDate or One.MyFileSize
    finally
      One.Free;
    end;
  finally
    MyFile.Free;
  end;
end;
Or with a with statement:
var MyFile: TSQLMyFile;
begin
  MyFile := TSQLMyFile.Create(Client,aMyFileID);
  try
    // here MyFile.FirstOne.MyFileDate would trigger an access violation
    with TSQLMyFileInfo.Create(Client,MyFile.FirstOne) do
    try
      // here you can access MyFileDate or MyFileSize
    finally
      Free;
    end;
  finally
    MyFile.Free;
  end;
end;
Mapping a TSQLRecord field into an integer ID is a bit difficult to learn at first. It was the only way we found to define a "one to one" or "one to many" relationship within the class definition, without the property attribute features of the Delphi compiler (only introduced in newer versions). The main drawback is that the compiler won't be able to identify at compile time some potential GPF issues at run time. It is up to the developer to write correct code when dealing with TSQLRecord properties. Using the AsTSQLRecord property and the overloaded TSQLRecord.Create(aPublishedRecord) constructor will help a lot.
Having to manage all nested TSQLRecord instances by hand can be annoying, and error-prone.
As an alternative, if you want to retrieve a whole TSQLRecord instance including its nested TSQLRecord published properties, you can use either of those two constructors:
- TSQLRecord.CreateJoined(aClient,aID);
- TSQLRecord.CreateAndFillPrepareJoined(), followed by a while FillOne do .... loop.
Both constructors will:
- instantiate all nested TSQLRecord published properties;
- retrieve their content from the database via a SELECT .... LEFT JOIN ... statement;
- ensure that the nested TSQLRecord instances will be released at Destroy of the main instance (to avoid any unexpected memory leak).
So you can safely write:
var MyFile: TSQLMyFile;
begin
  MyFile := TSQLMyFile.CreateJoined(Client,aMyFileID);
  try
    // here MyFile.FirstOne and MyFile.SecondOne are true instances,
    // already retrieved from the database by the constructor, so you can
    // safely access MyFile.FirstOne.MyFileDate or MyFile.SecondOne.MyFileSize
  finally
    MyFile.Free; // will also release MyFile.FirstOne and MyFile.SecondOne
  end;
end;
Note that this will work as expected when retrieving some data from the database. But in the current implementation of the ORM, any Update() call will manage only the main TSQLRecord properties, plus the ID values of the nested TSQLRecord properties - not the nested properties' values. For instance, in the code above, aClient.Update(MyFile) will update the TSQLMyFile table, but won't reflect any modification to the MyFile.FirstOne or MyFile.SecondOne properties. This limitation may be removed in the future - you may explicitly ask for this feature request.
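To make this limitation concrete, here is a minimal sketch, reusing the TSQLMyFile / TSQLMyFileInfo classes above (the 1024 value is purely illustrative): persisting a modification of a nested property requires an explicit Update() of the detail instance itself.
var MyFile: TSQLMyFile;
begin
  MyFile := TSQLMyFile.CreateJoined(Client,aMyFileID);
  try
    MyFile.FirstOne.MyFileSize := 1024; // modifies the in-memory instance only
    Client.Update(MyFile);          // updates the TSQLMyFile row - not the detail row
    Client.Update(MyFile.FirstOne); // an explicit Update() is needed for the detail row
  finally
    MyFile.Free; // will release FirstOne and SecondOne as well
  end;
end;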
As stated at http://en.wikipedia.org/wiki/Many-to-many_(data_model):
In systems analysis, a many-to-many relationship is a type of cardinality that refers to the relationship between two entities (see also Entity-Relationship Model) A and B in which A may contain a parent row for which there are many children in B and vice versa. For instance, think of A as Authors, and B as Books. An Author can write several Books, and a Book can be written by several Authors. Because most database management systems only support one-to-many relationships, it is necessary to implement such relationships physically via a third and fourth junction table, say, AB with two one-to-many relationships A -> AB and B -> AB. In this case the logical primary key for AB is formed from the two foreign keys (i.e. copies of the primary keys of A and B).
From the record point of view, and to follow the ORM vocabulary (in Ruby on Rails, Python, or other ActiveRecord clones), we could speak of a "has many" relationship. In the classic RDBMS implementation, a pivot table is created, containing two references to both related records. Additional information can be stored within this pivot table. It could be used, for instance, to store association time or the corresponding permissions of the relationship. This is called a "has many through" relationship.
In fact, there are several families of ORM design when implementing the "many to many" cardinality:
- Via a JOINed query from the ORM (i.e. pivot tables are abstracted from object lists or collections by the framework, to implement the "has many" relationship, but you will have to define lazy loading, and won't have any "has many through" relationship at hand);
- Via explicit pivot table handling, mapped as classes;
- Via data sharding, i.e. storing the detail objects within the main record content itself.
In the mORMot framework, we did not implement the 1st pattern, but the 2nd and 3rd:
- dedicated TSQLRecordMany classes make a true pivot table available (that is the 2nd family), introducing a true "has many through" cardinality;
- TCollection (of TPersistent classes) or dynamic arrays can be stored within one TSQLRecord class, for data sharding (i.e. the 3rd family).
Up to now, there is no explicit Lazy Loading feature in our ORM. There is no native handling of TSQLRecord collections or lists (as they appear in the first family of ORMs). This could sound like a limitation, but it allows you to manage exactly the data to be retrieved from the server in your code, and to keep bandwidth and memory use as low as possible. Use of a pivot table (via the TSQLRecordMany kind of records) allows tuned access to the data, and implements an optimal lazy loading feature. Note that the only case when some TSQLRecord instances are automatically created by the ORM is for those TSQLRecordMany published properties.
Defining a pivot table is a classic and powerful use of a relational database, and unleashes its power (especially when the linked data is huge).
But it is neither easy nor natural to handle it properly, since it introduces dependencies from the DB layer into the business model. For instance, it introduces additional requirements, like constraints / integrity checking and tables/classes inter-dependency.
Furthermore, in real life, we do not have such separated storage: we store all details within the main data. So for a Domain-Driven Design, which tries to map the real objects of its own domain, such a pivot table breaks the business logic. With today's computer power, we can safely implement a centralized way of storing data in our data repository.
Let us quote what Wikipedia states at http://en.wikipedia.org/wiki/Shared_nothing_architecture:
A shared nothing architecture (SN) is a distributed computing architecture in which each node is independent and self-sufficient, and there is no single point of contention across the system. People typically contrast SN with systems that keep a large amount of centrally-stored state information, whether in a database, an application server, or any other similar single point of contention.
As we stated in the TSQLRecord fields definition, in our ORM, high-level types like dynamic arrays or TPersistent / TCollection properties are stored as BLOB or TEXT inside the main data row. There is no external linked table, no Master/Detail to maintain. In fact, each TSQLRecord instance content could be made self-contained in our ORM.
In particular, you may consider using our TDocVariant custom variant type stored in a variant published property. It will allow you to store any complex document, of nested objects or arrays. They will be efficiently stored and transmitted as JSON.
When the server starts to have an increasing number of clients, such a data layout could be a major benefit. In fact, the so-called sharding, or horizontal partitioning of data, is a proven solution for web-scale databases, such as those in use by social networking sites. How do eBay or Facebook scale with so many users? Just by sharding.
A simple but very efficient sharding mechanism could therefore be implemented with our ORM. In-memory databases, or SQLite3, are good candidates for light-speed data processing. Even SQLite3 could scale very well in most cases, when properly used - see below.
Storing detailed data as JSON in BLOB or TEXT columns may at first sound like a wrong idea, since it breaks one widely accepted principle of the RDBMS architecture. But even Google had to break this dogma. And when MySQL or any similar widely used database tries to implement sharding, it needs a lot of effort. Others, like the NoSQL MongoDB, are better candidates: they are not tied to the SQL/RDBMS flat scheme.
Finally, this implementation pattern fits much better with a Domain-Driven Design. See below.
Therefore, on second thought, having a shared nothing architecture at hand could be a great advantage. Our ORM is already ready to break the table-oriented model of SQL. Let us go one step further.
The "has many" and "has many through" relationship we just described does follow the classic process of rows association in a relational database, using a pivot table. This does make sense if you have some DB background, but it is sometimes not worth it.
One drawback of this approach is that the data is split into several tables, and you should carefully take care of data integrity to ensure for instance that when you delete a record, all references to it are also deleted in the associated tables. Our ORM engine will take care of it, but could fail sometimes, especially if you play directly with the tables via SQL, instead of using high-level methods like FillMany*
or DestGetJoined
.
Another potential issue is that one business logical unit is split into several tables, therefore into several diverse TSQLRecord
and TSQLRecordMany
classes. From the ORM point of view, this could be confusing.
Starting with revision 1.13 of the framework, dynamic arrays, TStrings and TCollection can be used as published properties in the TSQLRecord class definition. This won't be strong enough to implement all possible "has many" architectures, but could be used in most cases, when you need to add a list of records within a particular record, and when this list won't have to be referenced as a stand-alone table.
Dynamic arrays will be stored as BLOB fields in the database, retrieved with Base64 encoding in the transmitted JSON stream, then serialized using the TDynArray wrapper. Therefore, only Delphi clients will be able to use this field content: you'll lose the AJAX capability of the ORM, at the benefit of a better integration with Object Pascal code. Some dedicated SQL functions have been added to the SQLite engine, like IntegerDynArrayContains, to search inside this BLOB field content from the WHERE clause of any search (see below). Those functions are available from AJAX queries.
TPersistent / TStrings and TCollection / TObjectList will be stored as TEXT fields in the database, following the ObjectToJSON function format: you can even serialize any TObject class, via a previous call to the TJSONSerializer.RegisterCustomSerializer class method - see below - or TObjectList lists of instances, if they are previously registered by TJSONSerializer.RegisterClassForJSON - see below. This format contains only valid JSON arrays or objects: so it could be deserialized by an AJAX application, for instance.
About this (trolling?) subject, and why/when you should use plain Delphi objects or arrays instead of a classic Master/Detail DB relationship, please read the "Objects, not tables" and "ORM is not DB" paragraphs below.
As we just wrote, a first-class candidate for data sharding in a TSQLRecord is our TDocVariant custom variant type.
You may define:
TSQLRecordData = class(TSQLRecord)
private
  fName: RawUTF8;
  fData: variant;
published
  property Name: RawUTF8 read fName write fName stored AS_UNIQUE;
  property Data: variant read fData write fData;
end;
Here, we defined two indexed keys, ready to access any data record:
- the ID: TID property defined at the TSQLRecord level, which will map the SQLite3 RowID primary key;
- the Name: RawUTF8 property, which was marked to be indexed by setting the "stored AS_UNIQUE" attribute.
Then, any kind of data may be stored in the Data: variant published property. In the database, it will be stored as JSON UTF-8 text, ready to be retrieved from any client, including AJAX / HTML5 applications. Delphi clients or servers will access the data via late-binding, from its TDocVariant instance.
You just reproduced the schema-less approach of the NoSQL database engines, in a few lines of code! Thanks to mORMot's design (detailed below), your applications are able to store any kind of document, and easily access them via HTTP.
The documents stored in such a database can have varying sets of fields, with different types for each field. One could have the following objects in a single collection of our Data: variant rows:
{ name : "Joe", x : 3.3, y : [1,2,3] } { name : "Kate", x : "abc" } { q : 456 }
Of course, when using the database for real problems, the data does have a fairly consistent structure. Something like the following will be more common, e.g. for a table persisting student objects:
{ name : "Joe", age : 30, interests : "football" } { name : "Kate", age : 25 }
Generally, there is a direct analogy between this schema-less style and dynamically typed languages. Constructs such as those above are easy to represent in PHP, Python and Ruby. And, thanks to our TDocVariant late-binding magic, even our good Delphi is able to handle those structures in our code. What we are trying to do here is to make this mapping to the database natural, like:
var aRec: TSQLRecordData;
    aID: TID;
begin
  // initialization of one record
  aRec := TSQLRecordData.Create;
  aRec.Name := 'Joe';                            // one unique key
  aRec.data := _JSONFast('{name:"Joe",age:30}'); // create a TDocVariant
  // or we could have used this overloaded constructor for simple fields:
  // aRec := TSQLRecordData.Create(['Joe',_ObjFast(['name','Joe','age',30])]);
  // now we can play with the data, e.g. via late-binding:
  writeln(aRec.Name); // will write 'Joe'
  writeln(aRec.Data); // will write '{"name":"Joe","age":30}' (auto-converted to JSON string)
  aRec.Data.age := aRec.Data.age+1;  // one year older
  aRec.Data.interests := 'football'; // add a property to the schema
  aID := aClient.Add(aRec,true); // will store {"name":"Joe","age":31,"interests":"football"}
  aRec.Free;
  // now we can retrieve the data either via the aID created integer, or via Name='Joe'
end;
One of the great benefits of these dynamic objects is that schema migrations become very easy. With a traditional RDBMS, releases of code might contain data migration scripts. Further, each release should have a reverse migration script, in case a rollback is necessary. ALTER TABLE operations can be very slow and result in scheduled downtime.
With a schema-less organization of the data, 90% of the time adjustments to the database become transparent and automatic. For example, if we wish to add GPA to the student objects, we add the attribute, re-save, and all is well - if we look up an existing student and reference GPA, we just get back null. Further, if we roll back our code, the new GPA fields in the existing objects are unlikely to cause problems if our code was well written.
In fact, SQLite3 is so efficient about its B-TREE index storage that such a structure may be used as a credible alternative to much heavier NoSQL engines, like MongoDB or CouchDB.
With the possibility to add some "regular" fields, e.g. plain numbers (like ahead-computed aggregation values), or text (like a summary or description field), you can still use any needed fast SQL query, without the complexity of map/reduce algorithm used by the NoSQL paradigm. You could even use the Full Text Search - FTS3/FTS4/FTS5, see below - or RTREE extension advanced features of SQLite3 to perform your queries. Then, thanks to mORMot's ability to access any external database engine, you are able to perform a JOINed query of your schema-less data with some data stored e.g. in an Oracle, PostgreSQL or MS SQL enterprise database. Or switch later to a true MongoDB storage, in just one line of code - see below.
As we stated, any variant field will be serialized as JSON, then stored as plain TEXT in the database. In order to make a complex query on the stored JSON, you could retrieve it in your end-user code, then use the corresponding TDocVariant instance to perform the search on its content. Of course, all this has a noticeable performance cost, especially when the data tends to grow.
The natural way of solving those performance issues is to add some "regular" RDBMS fields, with a proper index, then perform the requests on those fields. But sometimes you may need to run additional queries, perhaps in conjunction with "regular" field lookups, on the stored JSON data itself.
In order to avoid the slow conversion on the ORM client side, we defined some SQL functions dedicated to JSON processing.
The first is JsonGet(), which is able to extract any value from the TEXT field storing the variant:
JsonGet(ArrColumn,0) | returns a property value by index, from a JSON array |
JsonGet(ObjColumn,'PropName') | returns a property value by name, from a JSON object |
JsonGet(ObjColumn,'Obj1.Obj2.Prop') | returns a property value by path, including nested JSON objects |
JsonGet(ObjColumn,'Prop1,Prop2') | extracts properties by name, from a JSON object |
JsonGet(ObjColumn,'Prop1,Obj1.Prop') | extracts properties by name (including nested JSON objects), from a JSON object |
JsonGet(ObjColumn,'Prop*') | extracts properties by wildcard name, from a JSON object |
JsonGet(ObjColumn,'Prop*,Obj1.P*') | extracts properties by wildcard name (including nested JSON objects), from a JSON object |
If no value matches, this function will return SQL NULL. If the matching value is a simple JSON text or number, it will be returned as a TEXT, INTEGER or DOUBLE value, ready to be used as a result column or in any WHERE clause. If the returned value is a nested JSON object or array, it will be returned as TEXT, serialized as JSON; as a consequence, you may use it as the source of another JsonGet() function, or even gather the results via the CONCAT() aggregate function.
The comma-separated syntax allowed in the property name parameter (e.g. 'Prop1,Prop2,Prop3') will search for several properties at once in a single object, returning a JSON object of all matching values - e.g. '{"Prop2":"Value2","Prop3":123}' if the Prop1 property did not appear in the stored JSON object.
If you end the property name with a * character, it will return a JSON object with all matching properties. Any nested object will have its property names flattened as {"Obj1.Prop":...} within the returned JSON object.
Note that the comma-separated syntax also allows such wildcard searches, so that e.g.:
JsonGet(ObjColumn,'owner') = {"login":"smith","id":123456} as TEXT
JsonGet(ObjColumn,'owner.login') = "smith" as TEXT
JsonGet(ObjColumn,'owner.id') = 123456 as INTEGER
JsonGet(ObjColumn,'owner.name') = NULL
JsonGet(ObjColumn,'owner.login,owner.id') = {"owner.login":"smith","owner.id":123456} as TEXT
JsonGet(ObjColumn,'owner.I*') = {"owner.id":123456} as TEXT
JsonGet(ObjColumn,'owner.*') = {"owner.login":"smith","owner.id":123456} as TEXT
JsonGet(ObjColumn,'unknown.*') = NULL
Another function, named JsonHas(), is similar to JsonGet(), but will return TRUE or FALSE depending on whether the supplied property (specified by name or index) exists. It may be faster to use JsonHas() than JsonGet() e.g. in a WHERE clause, when you do not want to process the property value, but only return data rows containing the needed information.
JsonHas(ObjColumn,'owner') = true
JsonHas(ObjColumn,'owner.login') = true
JsonHas(ObjColumn,'owner.name') = false
JsonHas(ObjColumn,'owner.i*') = true
JsonHas(ObjColumn,'owner.n*') = false
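For instance - a minimal sketch reusing the TSQLRecordData class defined above, with a hypothetical IDs local variable, and the OneFieldValues() method illustrated later in this chapter - both functions may be combined in a WHERE clause executed entirely on the server side:
var IDs: TIntegerDynArray;
...
  // retrieve the IDs of all rows whose Data JSON document contains an
  // owner.login property, and whose owner.id value is lower than 1000
  aClient.OneFieldValues(TSQLRecordData,'ID',
    'JsonHas(Data,''owner.login'') and JsonGet(Data,''owner.id'')<1000',IDs);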
Since the process will take place within the SQLite3 engine itself, and since these functions use a SAX-like fast approach (without any temporary memory allocation during the search), they can be pretty efficient, and compare favorably to some dedicated NoSQL engines.
For instance, here is how the regression tests included in the framework define a TSQLRecord class with some additional dynamic array fields:
TFV = packed record
  Major, Minor, Release, Build: integer;
  Main, Detailed: string;
end;
TFVs = array of TFV;
TSQLRecordPeopleArray = class(TSQLRecordPeople)
private
  fInts: TIntegerDynArray;
  fCurrency: TCurrencyDynArray;
  fFileVersion: TFVs;
  fUTF8: RawUTF8;
published
  property UTF8: RawUTF8 read fUTF8 write fUTF8;
  property Ints: TIntegerDynArray index 1 read fInts write fInts;
  property Currency: TCurrencyDynArray index 2 read fCurrency write fCurrency;
  property FileVersion: TFVs index 3 read fFileVersion write fFileVersion;
end;
This TSQLRecordPeopleArray class inherits from TSQLRecordPeople, so it adds the new UTF8, Ints, Currency and FileVersion fields to the root class fields (FirstName, LastName, Data, YearOfBirth, YearOfDeath).
Some content is added to the PeopleArray table, with the following code:
var V: TSQLRecordPeople;
    VA: TSQLRecordPeopleArray;
    FV: TFV;
(...)
  V2.FillPrepare(Client,'LastName=:(''Dali''):');
  n := 0;
  while V2.FillOne do
  begin
    VA.FillFrom(V2); // fast copy some content from TSQLRecordPeople
The FillPrepare / FillOne methods are used to loop through all People table rows with a LastName column value equal to 'Dali' (with a prepared statement thanks to :( ):), then initialize a TSQLRecordPeopleArray instance with those values, using a FillFrom method call.
    inc(n);
    if n and 31=0 then
    begin
      VA.UTF8 := '';
      VA.DynArray('Ints').Add(n);
      Curr := n*0.01;
      VA.DynArray(2).Add(Curr);
      FV.Major := n;
      FV.Minor := n+2000;
      FV.Release := n+3000;
      FV.Build := n+4000;
      str(n,FV.Main);
      str(n+1000,FV.Detailed);
      VA.DynArray('FileVersion').Add(FV);
    end
    else
      str(n,VA.fUTF8);
The n variable is used to follow the PeopleArray row number: most of the time, its textual value is set in the UTF8 column, and once per 32 rows, one item is added to the VA and FV dynamic array fields.
We could have used normal access to the VA and FV dynamic arrays, as such:
SetLength(VA.Ints,length(VA.Ints)+1);
VA.Ints[high(VA.Ints)] := n;
But the DynArray method is used instead, to allow direct access to the dynamic array via a TDynArray wrapper. Those two lines therefore behave the same as this code:
VA.DynArray('Ints').Add(n);
Note that the DynArray method can be used via two overloaded sets of parameters: either the field name ('Ints'), or an index value, as defined in the class definition. So we could have written:
VA.DynArray(1).Add(n);
since the Ints published property has been defined as such:
property Ints: TIntegerDynArray index 1 read fInts write fInts;
Similarly, the following line will add a currency value to the Currency field:
VA.DynArray(2).Add(Curr);
And a more complex TFV record is added to the FileVersion field dynamic array with just one line:
VA.DynArray('FileVersion').Add(FV);
Of course, using the DynArray method is a bit slower than direct SetLength / Ints[] use. Using DynArray with an index should also be a bit faster than using DynArray with a textual field name (like 'Ints'), with the benefit of fewer typing errors in the property name. But if you need to add a lot of items to a dynamic array quickly, you could use a custom TDynArray wrapper with an associated external Count value (as sketched below), or direct access to its content (like SetLength + Ints[]).
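For instance - a minimal sketch using the TDynArray wrapper from SynCommons.pas, with hypothetical local variables - binding an external Count lets the wrapper grow the array capacity in chunks, instead of reallocating the memory at each Add():
var Ints: TIntegerDynArray;
    IntsCount: integer;
    DA: TDynArray;
    i: integer;
begin
  DA.Init(TypeInfo(TIntegerDynArray),Ints,@IntsCount);
  for i := 1 to 100000 do
    DA.Add(i); // capacity grows geometrically, not for each item
  DA.Capacity := IntsCount; // optionally trim the memory to the exact item count
end;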
Then the FillPrepare / FillOne loop ends with the following line:
    Check(Client.Add(VA,true)=n);
  end;
This will add the VA fields content into the database, creating a new row in the PeopleArray table, with an ID following the value of the n variable. All dynamic array fields will be serialized as BLOB into the database table.
In order to access the BLOB content of the dynamic arrays directly from SQL statements, some new SQL functions have been defined in TSQLDataBase, named after their native simple types:
- ByteDynArrayContains(BlobField,I64);
- WordDynArrayContains(BlobField,I64);
- IntegerDynArrayContains(BlobField,I64);
- CardinalDynArrayContains(BlobField,I64);
- CurrencyDynArrayContains(BlobField,I64) - in this case, I64 is not the currency value directly converted into an Int64 value (i.e. not Int64(aCurrency)), but the binary mapping of the currency value, i.e. aCurrency*10000 or PInt64(@aCurrency)^;
- Int64DynArrayContains(BlobField,I64);
- RawUTF8DynArrayContainsCase(BlobField,'Text');
- RawUTF8DynArrayContainsNoCase(BlobField,'Text').
Those functions allow direct access to the BLOB content like this:
for i := 1 to n shr 5 do
begin
  k := i shl 5;
  aClient.OneFieldValues(TSQLRecordPeopleArray,'ID',
    FormatUTF8('IntegerDynArrayContains(Ints,?)',[],[k]),IDs);
  Check(length(IDs)=n+1-32*i);
  for j := 0 to high(IDs) do
    Check(IDs[j]=k+j);
end;
In the above code, the WHERE clause of the OneFieldValues method will use the dedicated IntegerDynArrayContains SQL function to retrieve all records containing the specified integer value k in their Ints BLOB column. With such a function, all the processing is performed server-side, with no slow data transmission nor JSON/Base64 serialization.
For instance, using such a SQL function, you are able to store multiple TSQLRecord.ID field values into one TIntegerDynArray property column, and have direct search ability inside the SQL statement, as shown in the sketch below. This could be a very handy way of implementing "one to many" or "many to many" relationships, without the need of a pivot table.
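For instance - a purely hypothetical sketch, with a TSQLGroup class which is not part of the framework - a list of member IDs may be stored within the group record itself, then searched server-side:
type
  TSQLGroup = class(TSQLRecord)
  private
    fMembers: TIntegerDynArray; // will store TSQLRecordPeople.ID values
  published
    property Members: TIntegerDynArray index 1 read fMembers write fMembers;
  end;

// retrieve the IDs of all groups containing the person with ID = aPersonID
aClient.OneFieldValues(TSQLGroup,'ID',
  FormatUTF8('IntegerDynArrayContains(Members,?)',[],[aPersonID]),IDs);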
Those functions were implemented to be very efficient. They won't create any temporary dynamic array during the search, but will directly access the raw BLOB memory content, as returned by the SQLite engine. The RawUTF8DynArrayContainsCase / RawUTF8DynArrayContainsNoCase functions will also search directly inside the BLOB. With a huge number of requests, this could be slower than using a TSQLRecordMany pivot table, since the search won't use any index, and will have to read the whole BLOB field during the request. But in practice, those functions behave nicely with a relatively small amount of data (up to about 50,000 rows). Don't forget that BLOB column access is very optimized in SQLite3.
For more complex dynamic array content handling, you'll have to either create your own SQL function, using the TSQLDataBase.RegisterSQLFunction method and an associated TSQLDataBaseSQLFunction class, or use a dedicated service or a stored procedure - see below on how to implement it.
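As a minimal sketch of the first option, here is the RegisterSQLFunction() overload expecting a dynamic array type and a comparison function (SortDynArrayInteger comes from SynCommons.pas; the DB variable and the function name are illustrative assumptions):
// registers MyIntegerDynArrayContains(BlobField,?) within the DB: TSQLDataBase
// instance, to search a value inside a TIntegerDynArray BLOB column from SQL
DB.RegisterSQLFunction(TypeInfo(TIntegerDynArray),SortDynArrayInteger,
  'MyIntegerDynArrayContains');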
For instance, here is the way the regression tests included in the framework define a TSQLRecord class with some additional TPersistent, TCollection or TRawUTF8List fields (TRawUTF8List is just a TStringList-like component, dedicated to handle the RawUTF8 kind of string):
TSQLRecordPeopleObject = class(TSQLRecordPeople)
private
  fPersistent: TCollTst;
  fUTF8: TRawUTF8List;
public
  constructor Create; override;
  destructor Destroy; override;
published
  property UTF8: TRawUTF8List read fUTF8;
  property Persistent: TCollTst read fPersistent;
end;
In order to avoid any memory leak or access violation, it is mandatory to initialize, then release, all internal property instances in the overridden constructor and destructor of the class:
constructor TSQLRecordPeopleObject.Create;
begin
  inherited;
  fPersistent := TCollTst.Create;
  fUTF8 := TRawUTF8List.Create;
end;
destructor TSQLRecordPeopleObject.Destroy;
begin
  inherited;
  FreeAndNil(fPersistent);
  FreeAndNil(fUTF8);
end;
Here is how the regression tests are performed:
var VO: TSQLRecordPeopleObject;
(...)
if Client.TransactionBegin(TSQLRecordPeopleObject) then
try
  V2.FillPrepare(Client,'LastName=?',['Morse']);
  n := 0;
  while V2.FillOne do
  begin
    VO.FillFrom(V2); // fast copy some content from TSQLRecordPeople
    inc(n);
    VO.Persistent.One.Color := n+100;
    VO.Persistent.One.Length := n;
    VO.Persistent.One.Name := Int32ToUtf8(n);
    if n and 31=0 then
    begin
      VO.UTF8.Add(VO.Persistent.One.Name);
      with VO.Persistent.Coll.Add do
      begin
        Color := n+1000;
        Length := n*2;
        Name := Int32ToUtf8(n*3);
      end;
    end;
    Check(Client.Add(VO,true)=n);
  end;
  Client.Commit;
except
  Client.RollBack; // in case of error
end;
This will add 1000 rows to the PeopleObject table.
First of all, the adding is nested inside a transaction call, to speed up the SQL INSERT statements, via the TransactionBegin and Commit methods. Please note that the TransactionBegin method returns a boolean value, which should be checked in a multi-threaded or Client-Server environment (in this part of the test suite, content is accessed in the same thread, so checking the result is not mandatory, but shown here for accuracy). In the current implementation of the framework, transactions should not be nested. The typical transaction usage should be the following:
if Client.TransactionBegin(TSQLRecordPeopleObject) then
try
  //.... modify the database content, raise exceptions on error
  Client.Commit;
except
  Client.RollBack; // in case of error
end;
In a Client-Server environment with multiple Clients connected at the same time, you can use the dedicated TSQLRestClientURI.TransactionBeginRetry method:
if Client.TransactionBeginRetry(TSQLRecordPeopleObject,20) then ...
Note that transactions are handled according to the corresponding client session: the client should keep the transaction block as short as possible (e.g. using a batch command, as sketched below), since any write attempt by other clients will wait for the transaction to be released (with either a commit or a rollback).
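As a short sketch of such a batch command - using the BATCH sequence methods of the REST client, with VO / VO2 as hypothetical already-filled instances - all writes are gathered on the client side, then sent and executed within one short server-side transaction:
var Results: TIDDynArray;
...
  Client.BatchStart(TSQLRecordPeopleObject);
  Client.BatchAdd(VO,true);  // queued on the client side, not sent yet
  Client.BatchAdd(VO2,true);
  Client.BatchSend(Results); // one single request, processed as a transaction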
The fields inherited from the TSQLRecord class are retrieved via FillPrepare / FillOne method calls, for the rows with a LastName column matching 'Morse'. The values of the One TPersistent property instance are set (VO.Persistent.One), then, for every 32 rows, a new item is added to the VO.Persistent.Coll collection.
Here is, for instance, the data sent to the Server when the item with ID=32 is added:
{"FirstName":"Samuel Finley Breese31", "LastName":"Morse", "YearOfBirth":1791, "YearOfDeath":1872, "UTF8":["32"], "Persistent":{"One":{"Color":132,"Length":32,"Name":"32"},"Coll":[{"Color":1032,"Length":64,"Name":"96"}]} }
Up to revision 1.15 of the framework, the transmitted JSON content was not a true JSON object, but was sent as RawUTF8 TEXT values (i.e. every double-quote (") character was escaped as \" - e.g. "UTF8":"[\"32\"]"). Starting with revision 1.16 of the framework, the transmitted data is a true JSON object, to allow better integration with an AJAX client. That is, the UTF8 field is transmitted as a valid JSON array of strings, and Persistent as a valid JSON object with nested objects and arrays.
When all 1000 rows have been added to the database file, the following loop is called, once with a direct connection to the DB engine, and once with a remote client connection (using all available connection protocols):
for i := 1 to n do
begin
  VO.ClearProperties;
  Client.Retrieve(i,VO);
  Check(VO.ID=i);
  Check(VO.LastName='Morse');
  Check(VO.UTF8.Count=i shr 5);
  for j := 0 to VO.UTF8.Count-1 do
    Check(GetInteger(pointer(VO.UTF8[j]))=(j+1) shl 5);
  Check(VO.Persistent.One.Length=i);
  Check(VO.Persistent.One.Color=i+100);
  Check(GetInteger(pointer(VO.Persistent.One.Name))=i);
  Check(VO.Persistent.Coll.Count=i shr 5);
  for j := 0 to VO.Persistent.Coll.Count-1 do
    with VO.Persistent.Coll[j] do
    begin
      k := (j+1) shl 5;
      Check(Color=k+1000);
      Check(Length=k*2);
      Check(GetInteger(pointer(Name))=k*3);
    end;
end;
All the magic is made in the Client.Retrieve(i,VO) method. Data is retrieved from the database as TEXT values, then unserialized from JSON arrays or objects into the internal TRawUTF8List and TPersistent instances.
When the ID=33 row is retrieved, the following JSON content is received from the server:
{"ID":33, "FirstName":"Samuel Finley Breese32", "LastName":"Morse", "YearOfBirth":1791, "YearOfDeath":1872, "UTF8":"[\"32\"]", "Persistent":"{\"One\":{\"Color\":133,\"Length\":33,\"Name\":\"33\"},\"Coll\":[{\"Color\":1032,\"Length\":64,\"Name\":\"96\"}]}"}
In contrast to the POST content, this defines no valid nested JSON objects nor arrays, but transmits the UTF8 and Persistent fields as JSON strings. This is a known limitation of the framework, due to the fact that it is much faster to retrieve the text directly from the database than to process it for this operation. For an AJAX application, it won't be difficult to use a temporary string property, evaluate the JSON content from it, then replace the property with the corresponding object content. The implementation may change in the future.
Not only TPersistent, TCollection and TSQLRecord types can be serialized by writing all their published properties. The ORM core of mORMot uses the ObjectToJSON() and JSONToObject() (aka TJSONSerializer.WriteObject) functions to process proper JSON serialization.
You have two methods to register JSON serialization for any kind of class:
- define a custom serializer, via TJSONSerializer.RegisterCustomSerializer - see below;
- or, for TObjectList instances, make a proper call to TJSONSerializer.RegisterClassForJSON - see below.
In the database, such objects will be stored as TEXT (serialized as JSON), and transmitted as regular JSON objects or arrays when working in Client-Server mode.
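For instance - a minimal sketch, with a hypothetical TMyObject class - such a registration is a single call, usually made once at startup:
// allows TMyObject instances, stored within a TObjectList published property,
// to be serialized as JSON (including their class name), then unserialized
TJSONSerializer.RegisterClassForJSON([TMyObject]);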
This "Shared nothing architecture" matches perfectly with the NoSQL and Object-Document Mapping (ODM) design.
In fact, mORMot's integration with MongoDB has been optimized so that any of those high-level properties (like dynamic arrays, variants and TDocVariant, or any class
) will be stored as BSON documents on the MongoDB server.
If those types are able to be serialized as JSON - which is the case for simple types, variants
and for any dynamic array / record custom types - see below, then the mORMotDB.pas
unit will store this data as BSON objects or arrays on the server side, and not as BLOB or JSON text (as with SQL back-ends). You will be able to query by name any nested sub-document or sub-array, in the MongoDB collection.
As such, data sharing with mORMot will benefit of RDBMS back-end, as a reliable and proven solution, but also of the latest NoSQL technology.
Data sharding just feels natural, from the ORM point of view.
But defining a pivot table is a classic and powerful use of a relational database, and will unleash