Exampville Notebook

This Jupyter Notebook demonstrates part of a typical workflow for a transportation model.

Simulating the Travel Survey

We begin by generating a simulated travel survey for the people of Exampville. The latest version of Larch includes a helpful simulation tool that allows us to scale the size of Exampville to match the amount of time we want to spend processing the example (and the power of our computer). Although we can expect better model estimates when the sample size is larger, we’re mostly concerned here with the technical operation of Larch rather than with getting the best possible model fit, so we’ll simulate only a small amount of data.

In [1]:
import larch, numpy, pandas, os
from larch.roles import P,X
from larch.examples.exampville import builder
numpy.set_printoptions(linewidth=160)

Larch 3.3.19.6110808d

[08:39:05 AM]root: Connected log to stream <ipykernel.iostream.OutStream object at 0x102f41c50>
In [2]:
nZones = 15
transit_scope = (4,15)
n_HH = 2000

directory, omx, f_hh, f_pp, f_tour = builder(
    nZones=nZones,
    transit_scope=transit_scope,
    n_HH=n_HH,
)

f_hh.export_idco(os.path.join(directory,'exampville_hh.csv'))
f_pp.export_idco(os.path.join(directory,'exampville_person.csv'))
f_tour.export_idco(os.path.join(directory,'exampville_tours.csv'))
[08:39:05 AM]larch.exampville: EXAMPVILLE Builder (Year 1)
[08:39:05 AM]larch.exampville:   simulating a survey of 2000 households
[08:39:05 AM]larch.exampville:   traveling among 15 travel analysis zones
[08:39:05 AM]larch.exampville: Zones
[08:39:05 AM]larch.exampville: Skims
[08:39:05 AM]larch.exampville: HHs
[08:39:05 AM]larch.exampville: People
[08:39:05 AM]larch.exampville: Tours
[08:39:05 AM]larch.exampville: Choice Probability
[08:39:06 AM]larch.exampville: Choices
[08:39:06 AM]larch.exampville: Output
[08:39:06 AM]larch.exampville: EXAMPVILLE Completed Builder (Year 1)
[08:39:06 AM]larch.exampville:    SKIMS  : /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville.omx
[08:39:06 AM]larch.exampville:    HHs    : /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville_hh.h5
[08:39:06 AM]larch.exampville:    Persons: /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville_person.h5
[08:39:06 AM]larch.exampville:    Tours  : /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville_tours.h5
In [3]:
omx
Out[3]:
<larch.OMX> /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville.omx
 |  shape:(15, 15)
 |  data:
 |    AUTO_TIME (float64)
 |    DIST      (float64)
 |    RAIL_FARE (float64)
 |    RAIL_TIME (float64)
 |  lookup:
 |    EMPLOYMENT    (15 float64)
 |    EMP_NONRETAIL (15 float64)
 |    EMP_RETAIL    (15 float64)
 |    LAT           (15 int64)
 |    LON           (15 int64)
 |    TAZID         (15 int64)
In [4]:
f_hh
Out[4]:
<larch.DT> …/T/tmp7ya6g3ip/exampville_hh.h5
 |  > file is opened for read/write <
 |  nCases: 2000
 |  nAlts: <missing>
 |  idco:
 |    HHSIZE        int64
 |    HOMETAZ       int64
 |    INCOME        int64
In [5]:
f_pp
Out[5]:
<larch.DT> …/T/tmp7ya6g3ip/exampville_person.h5
 |  > file is opened for read/write <
 |  nCases: 3462
 |  nAlts: <missing>
 |  idco:
 |    AGE           int64
 |    HHID          int64
 |    N_OTHERTOURS  int64
 |    N_TOTALTOURS  int64
 |    N_WORKTOURS   int64
 |    WORKS         int64
In [6]:
f_tour
Out[6]:
<larch.DT> …/T/tmp7ya6g3ip/exampville_tours.h5
 |  > file is opened for read/write <
 |  nCases: 6123
 |  nAlts: <missing>
 |  idco:
 |    DTAZ          int64
 |    HHID          int64
 |    PERSONID      int64
 |    TOURMODE      int64
 |    TOURPURP      int64

Compiling Mode Choice Data

The Exampville data output contains a set of files similar to what we might find for a real travel survey: network skims, plus tables of households, persons, and tours. We’ll need to merge these tables to create a composite dataset for mode choice model estimation.
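
For readers who want to sanity-check the merge outside of Larch, roughly the same composite table can be assembled with pandas from the CSV files exported in the first cell. This is only a sketch, not part of the Larch workflow: it assumes the CSV exports carry each table's caseid column under the name 'caseid', which may differ in practice.

import os
import pandas as pd

# read the three exported survey tables (paths as written by the builder cell above)
hh = pd.read_csv(os.path.join(directory, 'exampville_hh.csv'))
pp = pd.read_csv(os.path.join(directory, 'exampville_person.csv'))
tours = pd.read_csv(os.path.join(directory, 'exampville_tours.csv'))

# attach person attributes to each tour, then household attributes
composite = (
    tours
    .merge(pp, left_on='PERSONID', right_on='caseid', suffixes=('', '_person'))
    .merge(hh, left_on='HHID', right_on='caseid', suffixes=('', '_hh'))
)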

In [7]:
flog = larch.logging.flogger('ModeData')


### MODE CHOICE DATA

# Define numbers and names for modes
DA = 1
SR = 2
Walk = 3
Bike = 4
Transit = 5

d = larch.DT()
# By omitting a filename here, we create a temporary HDF5 store.

d.set_alternatives( [DA,SR,Walk,Bike,Transit], ['DA','SR','Walk','Bike','Transit'] )

flog('merging survey datasets')
d.new_caseids( f_tour.caseids() )
d.merge_into_idco(f_tour, "caseid")
d.merge_into_idco(f_pp, "PERSONID")
d.merge_into_idco(f_hh, "HHID")

flog('merging skims')
# Create new variables with zero-based home and destination TAZ numbers
d.new_idco("HOMETAZi", "HOMETAZ-1", dtype=int)
d.new_idco("DTAZi", "DTAZ-1", dtype=int)

# Pull in plucked data from the matrix file
d.pluck_into_idco(omx, "HOMETAZi", "DTAZi")
# This command is new as of Larch 3.3.15
# It loads all the matrix DATA from an OMX based on OTAZ and DTAZ
# columns that already exist in the DT

flog('prep data')
d.choice_idco = {
    DA: 'TOURMODE==1',
    SR: 'TOURMODE==2',
    Walk: 'TOURMODE==3',
    Bike: 'TOURMODE==4',
    Transit: 'TOURMODE==5',
}
# Alternately:   d.choice_idco = {i:'TOURMODE=={}'.format(i) for i in [1,2,3,4,5]}



d.avail_idco = {
    DA: '(AGE>=16)',
    SR: '1',
    Walk: 'DIST<=3',
    Bike: 'DIST<=15',
    Transit: 'RAIL_TIME>0',
}

# Let's define some variables for clarity.
d.new_idco("WALKTIME", "DIST / 2.5 * 60 * (DIST<=3)") # 2.5 mph, 60 minutes per hour, max 3 miles
d.new_idco("BIKETIME", "DIST / 12 * 60 * (DIST<=15)")  # 12 mph, 60 minutes per hour, max 15 miles
d.new_idco("CARCOST", "DIST * 0.20")  # 20 cents per mile

d.info(1)
[08:39:06 AM]larch: merging survey datasets
[08:39:06 AM]larch: merging skims
[08:39:06 AM]larch: prep data
Out[7]:

<larch.DT> tmpgnonj_w5.h5f

Variable        dtype    Shape    Original Source
idco:
  AGE           int64    (6123,)  /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville_person.h5
  AUTO_TIME     float64  (6123,)  /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville.omx
  BIKETIME      float64  (6123,)
  CARCOST       float64  (6123,)
  DIST          float64  (6123,)  /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville.omx
  DTAZ          int64    (6123,)  /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville_tours.h5
  DTAZi         int64    (6123,)
  HHID          int64    (6123,)  /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville_tours.h5
  HHSIZE        int64    (6123,)  /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville_hh.h5
  HOMETAZ       int64    (6123,)  /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville_hh.h5
  HOMETAZi      int64    (6123,)
  INCOME        int64    (6123,)  /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville_hh.h5
  N_OTHERTOURS  int64    (6123,)  /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville_person.h5
  N_TOTALTOURS  int64    (6123,)  /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville_person.h5
  N_WORKTOURS   int64    (6123,)  /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville_person.h5
  PERSONID      int64    (6123,)  /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville_tours.h5
  RAIL_FARE     float64  (6123,)  /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville.omx
  RAIL_TIME     float64  (6123,)  /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville.omx
  TOURMODE      int64    (6123,)  /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville_tours.h5
  TOURPURP      int64    (6123,)  /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville_tours.h5
  WALKTIME      float64  (6123,)
  WORKS         int64    (6123,)  /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville_person.h5
  caseid        int64    (6123,)  /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville_tours.h5
idca:
  _avail_       <stack>  <stack>
  _choice_      <stack>  <stack>

Mode Choice Model Estimation

In [8]:
flog = larch.logging.flogger('ModeModel')

d.exclude_idco("TOURPURP != 1")
flog("nCases={}", d.nCases())

### MODE CHOICE MODEL

m = larch.Model(d)
m.title = "Exampville Work Tour Mode Choice v1"

# In order to create a shadow (alias) parameter, the source parameter must already exist,
# so we create it explicitly.
m.parameter("InVehTime")

# Then we can create the shadow, as a multiple of the original parameter.
m.shadow_parameter.NonMotorTime = P.InVehTime * 2



m.utility.co[DA] = (
    + P.InVehTime * X.AUTO_TIME
    + P.Cost * X.CARCOST # dollars per mile
)

m.utility[SR] = (
    + P.ASC_SR
    + P.InVehTime * X.AUTO_TIME
    + P.Cost * (X.CARCOST * 0.5) # dollars per mile, half share
    + P("HighInc:SR") * X("INCOME>75000")
)

m.utility[Walk] = (
    + P.ASC_Walk
    + P.NonMotorTime * X.WALKTIME
    + P("HighInc:Walk") * X("INCOME>75000")
)

m.utility[Bike] = (
    + P.ASC_Bike
    + P.NonMotorTime * X.BIKETIME
    + P("HighInc:Bike") * X("INCOME>75000")
)

m.utility[Transit] = (
    + P.ASC_Transit
    + P.InVehTime * X.RAIL_TIME
    + P.Cost * X.RAIL_FARE
    + P("HighInc:Transit") * X("INCOME>75000")
)

Car = m.new_nest('Nest:Car', children=[DA,SR])
NonMotor = m.new_nest('Nest:NonMotor', children=[Walk,Bike])
Motor = m.new_nest('Nest:Motorized', children=[Car,Transit])

from larch.util.categorize import Categorizer

m.parameter_groups = (
    Categorizer("Level of Service",
        ".*Time.*",
        ".*Cost.*",
    ),
    Categorizer("Alternative Specific Constants",
        "ASC.*",
    ),
    Categorizer("Income",
        ".*HighInc.*",
        ".*LowInc.*",
    ),
    Categorizer("Logsum Parameters",
        "Nest.*",
    ),
)

m.maximize_loglike()

[08:39:07 AM]larch: nCases=1897
[08:39:07 AM]larch.Model: Provisioning Avail data...
[08:39:07 AM]larch.Model: Provisioning Choice data...
[08:39:07 AM]larch.Model: Provisioning UtilityCO data...
[08:39:07 AM]larch.Model: Provisioning Weight data...
[08:39:07 AM]larch.Model: Model.setUp...
[08:39:07 AM]larch.Model: Model.setUp complete
[08:39:07 AM]larch.Model: Using <larch.util.optimize.OptimizeTechnique SLSQP>
[08:39:07 AM]larch.Model: Optimization terminated successfully. [SLSQP]
[08:39:07 AM]larch.Model: Ending leg (success) using SLSQP at [-0.11274266  0.74170258  0.86969656  0.84439805 -0.34073929 -1.3943154  -1.75623615  0.73177238 -0.79580814 -1.66675159 -1.0525934  -1.67706803 -1.24100479]
[08:39:08 AM]larch.Model: Preliminary Results
+-----------------+---------------+----------+------+----------+
 Parameter         Estimated Value Std Error  t-Stat Null Value
+-----------------+---------------+----------+------+----------+

= Level of Service =============================================
 InVehTime         -0.1127          nan        nan    0
 NonMotorTime      -0.2255         = InVehTime * 2
 Cost              -0.3407          nan        nan    0
= Alternative Specific Constants ===============================
 ASC_SR            -1.394           nan        nan    0
 ASC_Walk           0.7318          nan        nan    0
 ASC_Bike          -1.667           nan        nan    0
 ASC_Transit       -1.677           nan        nan    0
= Income =======================================================
 HighInc SR        -1.756           nan        nan    0
         Walk      -0.7958          nan        nan    0
         Bike      -1.053           nan        nan    0
         Transit   -1.241           nan        nan    0
= Logsum Parameters ============================================
 Nest    Car        0.7417          nan        nan    1
         NonMotor   0.8697          nan        nan    1
         Motorized  0.8444          nan        nan    1
+-----------------+---------------+----------+------+----------+
[08:39:08 AM]larch.Model: Final Results
+-----------------+---------------+----------+------+----------+
 Parameter         Estimated Value Std Error  t-Stat Null Value
+-----------------+---------------+----------+------+----------+

= Level of Service =============================================
 InVehTime         -0.1127          0.01356   -8.31   0
 NonMotorTime      -0.2255         = InVehTime * 2
 Cost              -0.3407          0.1812    -1.88   0
= Alternative Specific Constants ===============================
 ASC_SR            -1.394           0.9541    -1.46   0
 ASC_Walk           0.7318          0.3015     2.43   0
 ASC_Bike          -1.667           0.2739    -6.09   0
 ASC_Transit       -1.677           0.4977    -3.37   0
= Income =======================================================
 HighInc SR        -1.756           1.29      -1.36   0
         Walk      -0.7958          0.4189    -1.90   0
         Bike      -1.053           0.4631    -2.27   0
         Transit   -1.241           0.4508    -2.75   0
= Logsum Parameters ============================================
 Nest    Car        0.7417          0.5384    -0.48   1
         NonMotor   0.8697          0.1835    -0.71   1
         Motorized  0.8444          0.2487    -0.63   1
+-----------------+---------------+----------+------+----------+
Out[8]:
messages: Optimization terminated successfully. [SLSQP]:
                  |     SLSQP:Optimization terminated successfully.
              ctol: 1.6784867506126543e-10
               fun: 963.8426415309916
  installed_memory: '24.0 GiB'
               jac: array([ -3.62712208e-04,  -2.89700764e-04,  -2.03759665e-05,  -3.39212456e-05,   1.10963310e-05,  -1.03176781e-04,  -4.70217123e-05,  -1.15828714e-06,
                  |          3.85890819e-06,  -1.88193682e-05,  -5.68614980e-06,  -4.56550779e-06,  -9.30138210e-06,   0.00000000e+00])
           loglike: -963.8426415309916
      loglike_null: -2592.698004024909
           message: 'Optimization terminated successfully. [SLSQP]'
               nit: 42
             niter: [('SLSQP', 42)]
 peak_memory_usage: '134.1171875 MiB'
             stats: <larch.core.runstats, success in 0:00.51>
            status: 0
           success: True
                 x: array([-0.11274266,  0.74170258,  0.86969656,  0.84439805, -0.34073929, -1.3943154 , -1.75623615,  0.73177238, -0.79580814, -1.66675159, -1.0525934 ,
                  |        -1.67706803, -1.24100479])
In [9]:
## Save results to a report
m.xhtml_report(filename=os.path.join(directory,'mode_choice_model.html'), cats='**')



## View Mode Choice Model Results
# Here we lump together a bunch of report sections into a single report that will
# display neatly in a jupyter notebook.
sections = ['params','ll',
 'nesting_tree',
 'latest','UTILITYSPEC','PROBABILITYSPEC',
 'DATA',
 'excludedcases','NOTES','options',
 'possible_overspecification']
m.jupyter(*sections)

Model Parameter Estimates

Parameter          Estimated Value  Std Error  t-Stat  Null Value
Level of Service
  InVehTime        -0.1127          0.01356    -8.31   0
  NonMotorTime     -0.2255          = InVehTime * 2
  Cost             -0.3407          0.1812     -1.88   0
Alternative Specific Constants
  ASC_SR           -1.394           0.9541     -1.46   0
  ASC_Walk          0.7318          0.3015      2.43   0
  ASC_Bike         -1.667           0.2739     -6.09   0
  ASC_Transit      -1.677           0.4977     -3.37   0
Income
  HighInc:SR       -1.756           1.29       -1.36   0
  HighInc:Walk     -0.7958          0.4189     -1.90   0
  HighInc:Bike     -1.053           0.4631     -2.27   0
  HighInc:Transit  -1.241           0.4508     -2.75   0
Logsum Parameters
  Nest:Car          0.7417          0.5384     -0.48   1
  Nest:NonMotor     0.8697          0.1835     -0.71   1
  Nest:Motorized    0.8444          0.2487     -0.63   1

Model Estimation Statistics

Statistic                           Aggregate  Per Case
Number of Cases                     1897
Log Likelihood at Convergence       -963.84    -0.51
Log Likelihood at Null Parameters   -2592.70   -1.37
Rho Squared w.r.t. Null Parameters  0.628
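
For reference, the rho squared with respect to the null parameters is one minus the ratio of the log likelihood at convergence to the log likelihood at the null parameters: 1 - (-963.84)/(-2592.70) ≈ 0.628.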

Nesting Structure

[Nesting tree figure: Root (0) -> Nest:NonMotor (7) and Nest:Motorized (8); Nest:NonMotor (7) -> Walk (3), Bike (4); Nest:Motorized (8) -> Transit (5), Nest:Car (6); Nest:Car (6) -> DA (1), SR (2). Elemental alternatives: DA (1), SR (2), Walk (3), Bike (4), Transit (5).]

Latest Estimation Run Statistics

Estimation Date             Tuesday, November 08 2016, 08:39:07 AM
Results                     success
Message                     Optimization terminated successfully. [SLSQP]
Optimization Method         SLSQP
Number of Iterations        42
Running Time                Total 0.509 seconds
- setup                     0:00.08
- null_likelihood           0:00
- weight choice rebalance   0:00
- weight autorescale        0:00
- optimize:SLSQP            0:00.33
- parameter covariance      0:00.09
- cleanup                   0:00
Notes                       did not automatically rescale weights
Computer                    paladin.thunderdome
Processor                   Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz
Number of CPU Cores         8
Number of Threads Used      8
Installed Memory            24.0 GiB
Peak Memory Usage           134.1171875 MiB

Utility Specification

Resolved Utility

Code  Alternative  Resolved Utility
1     DA           - 0.1127*AUTO_TIME
                   - 0.3407*CARCOST
2     SR           - 1.394
                   - 0.1127*AUTO_TIME
                   - 0.3407*CARCOST*(0.5)
                   - 1.756*INCOME>75000
3     Walk           0.7318
                   - 0.2255*WALKTIME
                   - 0.7958*INCOME>75000
4     Bike         - 1.667
                   - 0.2255*BIKETIME
                   - 1.053*INCOME>75000
5     Transit      - 1.677
                   - 0.1127*RAIL_TIME
                   - 0.3407*RAIL_FARE
                   - 1.241*INCOME>75000

Code  Nest            Resolved Utility
6     Nest:Car        0.7417 * log( exp(Utility[DA]/0.7417) + exp(Utility[SR]/0.7417) )
7     Nest:NonMotor   0.8697 * log( exp(Utility[Walk]/0.8697) + exp(Utility[Bike]/0.8697) )
8     Nest:Motorized  0.8444 * log( exp(Utility[Transit]/0.8444) + exp(Utility[Nest:Car]/0.8444) )
0     ROOT            log( exp(Utility[Nest:Motorized]) + exp(Utility[Nest:NonMotor]) )

Formulaic Utility

Code  Alternative  Formulaic Utility
1     DA             InVehTime*AUTO_TIME
                   + Cost*CARCOST
2     SR             ASC_SR
                   + InVehTime*AUTO_TIME
                   + Cost*CARCOST*(0.5)
                   + HighInc:SR*INCOME>75000
3     Walk           ASC_Walk
                   + NonMotorTime*WALKTIME
                   + HighInc:Walk*INCOME>75000
4     Bike           ASC_Bike
                   + NonMotorTime*BIKETIME
                   + HighInc:Bike*INCOME>75000
5     Transit        ASC_Transit
                   + InVehTime*RAIL_TIME
                   + Cost*RAIL_FARE
                   + HighInc:Transit*INCOME>75000

Code  Nest            Formulaic Utility
6     Nest:Car        Nest:Car * log( exp(Utility[DA]/Nest:Car) + exp(Utility[SR]/Nest:Car) )
7     Nest:NonMotor   Nest:NonMotor * log( exp(Utility[Walk]/Nest:NonMotor) + exp(Utility[Bike]/Nest:NonMotor) )
8     Nest:Motorized  Nest:Motorized * log( exp(Utility[Transit]/Nest:Motorized) + exp(Utility[Nest:Car]/Nest:Motorized) )
0     ROOT            log( exp(Utility[Nest:Motorized]) + exp(Utility[Nest:NonMotor]) )

Probability Specification

Resolved Probability

Code  Alternative  Resolved Probability
1     DA           exp(Utility[DA]/0.7417)/exp(Utility[Nest:Car]/0.7417) * exp(Utility[Nest:Car]/0.8444)/exp(Utility[Nest:Motorized]/0.8444) * exp(Utility[Nest:Motorized])/exp(Utility[ROOT])
2     SR           exp(Utility[SR]/0.7417)/exp(Utility[Nest:Car]/0.7417) * exp(Utility[Nest:Car]/0.8444)/exp(Utility[Nest:Motorized]/0.8444) * exp(Utility[Nest:Motorized])/exp(Utility[ROOT])
3     Walk         exp(Utility[Walk]/0.8697)/exp(Utility[Nest:NonMotor]/0.8697) * exp(Utility[Nest:NonMotor])/exp(Utility[ROOT])
4     Bike         exp(Utility[Bike]/0.8697)/exp(Utility[Nest:NonMotor]/0.8697) * exp(Utility[Nest:NonMotor])/exp(Utility[ROOT])
5     Transit      exp(Utility[Transit]/0.8444)/exp(Utility[Nest:Motorized]/0.8444) * exp(Utility[Nest:Motorized])/exp(Utility[ROOT])

Formulaic Probability

Code  Alternative  Formulaic Probability
1     DA           exp(Utility[DA]/Nest:Car)/exp(Utility[Nest:Car]/Nest:Car) * exp(Utility[Nest:Car]/Nest:Motorized)/exp(Utility[Nest:Motorized]/Nest:Motorized) * exp(Utility[Nest:Motorized])/exp(Utility[ROOT])
2     SR           exp(Utility[SR]/Nest:Car)/exp(Utility[Nest:Car]/Nest:Car) * exp(Utility[Nest:Car]/Nest:Motorized)/exp(Utility[Nest:Motorized]/Nest:Motorized) * exp(Utility[Nest:Motorized])/exp(Utility[ROOT])
3     Walk         exp(Utility[Walk]/Nest:NonMotor)/exp(Utility[Nest:NonMotor]/Nest:NonMotor) * exp(Utility[Nest:NonMotor])/exp(Utility[ROOT])
4     Bike         exp(Utility[Bike]/Nest:NonMotor)/exp(Utility[Nest:NonMotor]/Nest:NonMotor) * exp(Utility[Nest:NonMotor])/exp(Utility[ROOT])
5     Transit      exp(Utility[Transit]/Nest:Motorized)/exp(Utility[Nest:Motorized]/Nest:Motorized) * exp(Utility[Nest:Motorized])/exp(Utility[ROOT])
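
Read together, these expressions are the standard nested logit decomposition: the probability of an elemental alternative is the product of conditional logit probabilities down the tree (the alternative within its nest, that nest within its parent, and so on up to the root), with the utilities at each level scaled by the corresponding nest's logsum parameter.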

Choice and Availability

Code  Alternative  # Avail  # Chosen  Availability Condition
1     DA           1897     1573      (AGE>=16)
2     SR           1897     171       1
3     Walk         662      47        DIST<=3
4     Bike         1882     28        DIST<=15
5     Transit      1245     78        RAIL_TIME>0

Excluded Cases

Criteria           Data Source  # Cases Excluded  # Cases Remaining
0  TOURPURP != 1   idco         4226              1897

Options

author                       jpn
autocreate_parameters        True
calc_null_likelihood         True
calc_std_errors              True
enforce_bounds               True
enforce_constraints          True
enforce_network_constraints  False
force_finite_diff_grad       False
force_recalculate            False
gradient_diagnostic          0
hessian_diagnostic           0
idca_avail_ratio_floor       0.1
ignore_bad_constraints       False
log_turns                    False
mute_nan_warnings            True
null_disregards_holdfast     True
save_db_hash                 False
suspend_xylem_rebuild        False
teardown_after_estimate      True
threads                      8
weight_autorescale           True
weight_choice_rebalance      True
<matplotlib.figure.Figure at 0x11db1bcf8>
In [10]:
m.jupyter.choice_distributions

Choice Distributions by Data Attributes

Variable       Value[s]     DA      SR      Walk    Bike   Transit  Count
AUTO_TIME      > 8.6        83.31%  10.81%  0.00%   0.16%  5.72%    629.0
               4.1 to 8.6   86.88%  9.72%   0.00%   0.77%  2.62%    648.0
               < 4.1        78.39%  6.45%   7.58%   3.55%  4.03%    620.0
CARCOST        > 1.3        84.31%  11.89%  0.00%   0.00%  3.80%    631.0
               0.39 to 1.3  86.80%  8.50%   0.00%   0.61%  4.10%    659.0
               < 0.39       77.27%  6.59%   7.74%   3.95%  4.45%    607.0
CARCOST*(0.5)  > 0.64       84.31%  11.89%  0.00%   0.00%  3.80%    631.0
               0.2 to 0.64  86.80%  8.50%   0.00%   0.61%  4.10%    659.0
               < 0.2        77.27%  6.59%   7.74%   3.95%  4.45%    607.0
INCOME>75000   Yes          94.20%  1.73%   1.60%   0.74%  1.73%    810.0
               No           74.52%  14.44%  3.13%   2.02%  5.89%    1087.0
WALKTIME       > 37         85.78%  7.80%   0.00%   2.29%  4.13%    218.0
               23 to 37     84.82%  8.48%   0.45%   2.23%  4.02%    224.0
               < 23         63.64%  4.55%   20.91%  6.36%  4.55%    220.0
               = 0          85.51%  10.12%  0.00%   0.32%  4.05%    1235.0
BIKETIME       > 31         84.22%  12.08%  0.00%   0.00%  3.70%    621.0
               9.8 to 31    86.70%  8.56%   0.00%   0.61%  4.13%    654.0
               < 9.8        77.27%  6.59%   7.74%   3.95%  4.45%    607.0
               = 0          93.33%  0.00%   0.00%   0.00%  6.67%    15.0
RAIL_TIME      > 7.7        85.17%  7.93%   0.00%   0.00%  6.91%    391.0
               2.6 to 7.7   83.00%  8.83%   0.00%   1.32%  6.84%    453.0
               < 2.6        75.56%  6.23%   8.73%   4.49%  4.99%    401.0
               = 0          86.04%  11.50%  1.84%   0.61%  0.00%    652.0
RAIL_FARE      0.0          86.04%  11.50%  1.84%   0.61%  0.00%    652.0
               1.5          81.29%  7.71%   2.81%   1.93%  6.27%    1245.0
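
Each row gives the observed mode shares among the work tours whose value of the listed variable falls in the indicated band, along with the count of such tours; within each variable the counts sum to the 1897 cases remaining after the work-tour filter.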
In [11]:
m.jupyter.datasummary

Various Data Statistics

Utility idCO Data

Data           Mean     Std.Dev.  Minimum    Maximum  Zeros  Mean(NonZero)  Positives
AUTO_TIME      7.7198   5.637     2          49.678   0      7.7198         1897
CARCOST        0.97796  0.71928   0.0040437  3.621    0      0.97796        1897
1              1        0         1          1        0      1              1897
CARCOST*(0.5)  0.48898  0.35964   0.0020218  1.8105   0      0.48898        1897
INCOME>75000   0.42699  0.49464   0          1        1087   1              810
WALKTIME       10.386   17.018    0          71.245   1235   29.763         662
BIKETIME       23.819   17.419    0          73.871   15     24.009         1882
RAIL_TIME      4.1426   5.4032    0          28.821   652    6.312          1245
RAIL_FARE      0.98445  0.71241   0          1.5      652    1.5            1245
Graphs are represented as pie charts if the data element has 4 or fewer distinct values.
Histograms are green if the displayed range truncates some extreme outliers.
Histograms are orange if the zeros are numerous and have been excluded.

Utility idCO Data by Alternative

Alternative  Data           Filter    Mean      Std.Dev.  Minimum               Maximum             Mean (Nonzeros)  # Zeros  # Positives
DA           AUTO_TIME      Chosen    7.8347    5.607     2.0                   49.67825818294802   7.8347           0        1573
             AUTO_TIME      Unchosen  7.1621    5.7478    2.0                   33.84280990076548   7.1621           0        324
             CARCOST        Chosen    1.0037    0.70675   0.004043679488065144  3.6210315487992397  1.0037           0        1573
             CARCOST        Unchosen  0.85277   0.765     0.004043679488065144  3.02732395233208    0.85277          0        324
SR           AUTO_TIME      Chosen    8.4195    5.3067    2.0                   31.692757725069843  8.4195           0        171
             AUTO_TIME      Unchosen  7.6505    5.664     2.0                   49.67825818294802   7.6505           0        1726
             CARCOST*(0.5)  Chosen    0.55414   0.3506    0.011827442586893323  1.4774221314319649  0.55414          0        171
             CARCOST*(0.5)  Unchosen  0.48253   0.35988   0.002021839744032572  1.8105157743996199  0.48253          0        1726
             INCOME>75000   Chosen    0.081871  0.27417   0.0                   1.0                 1                157      14
             INCOME>75000   Unchosen  0.46118   0.49849   0.0                   1.0                 1                930      796
Walk         INCOME>75000   Chosen    0.2766    0.44731   0.0                   1.0                 1                34       13
             INCOME>75000   Unchosen  0.40325   0.49055   0.0                   1.0                 1                367      248
             WALKTIME       Chosen    3.8568    4.4628    0.4852415385678172    23.486840213586337  3.8568           0        47
             WALKTIME       Unchosen  31.743    14.69     0.4852415385678172    71.24491157605924   31.743           0        615
Bike         INCOME>75000   Chosen    0.21429   0.41033   0.0                   1.0                 1                22       6
             INCOME>75000   Unchosen  0.43042   0.49514   0.0                   1.0                 1                1056     798
             BIKETIME       Chosen    7.2267    6.8186    0.1010919872016286    25.112680595697487  7.2267           0        28
             BIKETIME       Unchosen  24.263    17.344    0.1010919872016286    73.87110657159823   24.263           0        1854
Transit      INCOME>75000   Chosen    0.17949   0.38376   0.0                   1.0                 1                64       14
             INCOME>75000   Unchosen  0.43702   0.49602   0.0                   1.0                 1                657      510
             RAIL_TIME      Chosen    7.1633    5.8676    0.17160166627168116   28.820795984663313  7.1633           0        78
             RAIL_TIME      Unchosen  6.2551    5.5223    0.17160166627168116   28.820795984663313  6.2551           0        1167
             RAIL_FARE      Chosen    1.5       0         1.5                   1.5                 1.5              0        78
             RAIL_FARE      Unchosen  1.5       0         1.5                   1.5                 1.5              0        1167
Graphs are represented as pie charts if the data element has 4 or fewer distinct values.
Histograms are green if the displayed range truncates some extreme outliers.
Histograms are orange if the zeros are numerous and have been excluded.
<matplotlib.figure.Figure at 0x11e18f668>
In [12]:
## Saving the model
m_filename = os.path.join(directory, 'exampville_workmodechoice_v1.larchmodel')
m.save(m_filename)
m_filename
Out[12]:
'/var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville_workmodechoice_v1.larchmodel'
In [13]:
m2 = larch.Model.load(m_filename)
print(m2)
============================================================================================
Exampville Work Tour Mode Choice v1
============================================================================================
Model Parameter Estimates
--------------------------------------------------------------------------------------------
Parameter       InitValue       FinalValue      StdError        t-Stat          NullValue
~~ Level of Service ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
InVehTime        0              -0.112743        0.0135618      -8.31328         0
Cost             0              -0.340739        0.181214       -1.88032         0
~~ Alternative Specific Constants ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ASC_SR           0              -1.39432         0.954061       -1.46145         0
ASC_Walk         0               0.731772        0.301454        2.42747         0
ASC_Bike         0              -1.66675         0.273898       -6.08529         0
ASC_Transit      0              -1.67707         0.497688       -3.36972         0
~~ Income ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
HighInc:SR       0              -1.75624         1.28965        -1.3618          0
HighInc:Walk     0              -0.795808        0.418918       -1.89967         0
HighInc:Bike     0              -1.05259         0.463062       -2.27312         0
HighInc:Transit  0              -1.24099         0.450795       -2.7529          0
~~ Logsum Parameters ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Nest:Car         1               0.741703        0.538446       -0.479709        1
Nest:NonMotor    1               0.869697        0.183454       -0.710277        1
Nest:Motorized   1               0.844398        0.248671       -0.625735        1
============================================================================================
Model Estimation Statistics
--------------------------------------------------------------------------------------------
Log Likelihood at Convergence           -963.84
Log Likelihood at Null Parameters       -2592.70
--------------------------------------------------------------------------------------------
Rho Squared w.r.t. Null Parameters      0.628
============================================================================================
Latest Estimation Run Statistics
--------------------------------------------------------------------------------------------
Number of Iterations            42
Running Time                    0.511   seconds
- setup                         0:00.08
- null_likelihood               0:00
- weight choice rebalance       0:00
- weight autorescale            0:00
- optimize:SLSQP                0:00.33
- parameter covariance          0:00.09
- cleanup                       0:00
Notes                           did not automatically rescale weights
Results                         success
============================================================================================

Building Mode Choice Logsums

We’re going to build the mode choice logsums, and store them in the DT for use in the destination choice model.

One important thing to recall is that we applied some exclusion filters to the DT, to get only work tours (as opposed to all tours). In order to remain consistent, new arrays we add to the DT need to have the same number of rows as existing DT arrays (and the same number of caseids).
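
For each case and candidate destination, the mode choice logsum we want is the utility of the ROOT node of the nesting structure shown in the report above, evaluated at the estimated parameters with the level-of-service data swapped to that destination:

    MODECHOICELOGSUM = log( exp(Utility[Nest:Motorized]) + exp(Utility[Nest:NonMotor]) )

This is the expected maximum utility across the available modes for a tour to that destination, which is what makes it a useful accessibility term in the destination choice model.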

In [14]:
print(d.nCases())
print(d.nAllCases())
1897
6123

The nCases method returns the number of active (non-excluded) cases, while the nAllCases method counts all of the cases in the DT, whether they are active or not.

Another concern is that the non-excluded cases are not a contiguous set; they are spread around the list of all cases. Fortunately, we have a method to extract the current active case indexes, which we will use to expand the logsums we create and push them back into the DT in the correct places.
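
The push-back step we will use after computing the logsums (the assignment to d.idca.MODECHOICELOGSUM[screen_idx, :] below) is plain numpy fancy indexing. A minimal sketch with made-up sizes, not part of the Exampville workflow:

import numpy
full = numpy.zeros([10, 3], dtype=numpy.float32)     # stands in for the on-disk nAllCases x nZones array
active = numpy.array([0, 2, 3, 7])                    # stands in for d.get_screen_indexes()
computed = numpy.ones([4, 3], dtype=numpy.float32)    # one row per active (non-excluded) case
full[active, :] = computed                            # rows for excluded cases are left as zeros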

In [15]:
screen_idx = d.get_screen_indexes()
print(screen_idx)
print(screen_idx.shape)
[   0    2    3 ..., 6112 6116 6119]
(1897,)
In [16]:
flog = larch.logging.flogger('modelogsum')

# First create a new blank array for the logsums.
d.new_idca('MODECHOICELOGSUM', numpy.zeros([d.nAllCases(), nZones], dtype=numpy.float32))

# We will also create a separate array in memory, to cache the logsums we calculate
# so that we can push all the calculated values into the DT on disk in one pass at the end.
# Note the in-memory version is sized for nCases, while the disk version uses nAllCases.
modechoicelogsums = numpy.zeros([d.nCases(), nZones], dtype=numpy.float32)

m.setUp()
# If the model was just estimated, this is unnecessary; otherwise this is necessary
# to allocate the needed arrays for storing data and results.

# Loop over all TAZ indexes based on the number of zones
for dtazi in range(nZones):
    # First we pull in replacement data from the skims,
    # replacing the data for the actually chosen destination with alternative
    # data from the proposed destination
    d.pluck_into_idco(omx, "HOMETAZi", dtazi, overwrite=True)

    # We need to refresh these derived variables too, as they are derived
    # from other data columns we just overwrote...
    d.new_idco("WALKTIME", "DIST / 2.5 * 60 * (DIST<=3)", overwrite=True) # 2.5 mph, 60 minutes per hour, max 3 miles
    d.new_idco("BIKETIME", "DIST / 12 * 60 * (DIST<=15)", overwrite=True)  # 12 mph, 60 minutes per hour, max 15 miles
    d.new_idco("CARCOST", "DIST * 0.20", overwrite=True)  # 20 cents per mile

    # Then we will load the relevant data into the Model
    m.provision()

    # Then we calculate and extract the logsums using the model and store
    # them in the correct indexes of the relevant column in the output array
    modechoicelogsums[:,dtazi] = m.logsums()

d.idca.MODECHOICELOGSUM[screen_idx, :] = modechoicelogsums


[08:39:24 AM]larch.Model: Model.setUp...
[08:39:24 AM]larch.Model: Model.setUp complete
[08:39:25 AM]larch.Model: Provisioning Avail data...
[08:39:25 AM]larch.Model: Provisioning Choice data...
[08:39:25 AM]larch.Model: Provisioning UtilityCO data...
[08:39:25 AM]larch.Model: Provisioning Weight data...
[08:39:25 AM]larch.Model: Provisioning Avail data...
[08:39:25 AM]larch.Model: Provisioning Choice data...
[08:39:25 AM]larch.Model: Provisioning UtilityCO data...
[08:39:25 AM]larch.Model: Provisioning Weight data...
[08:39:25 AM]larch.Model: Provisioning Avail data...
[08:39:25 AM]larch.Model: Provisioning Choice data...
[08:39:25 AM]larch.Model: Provisioning UtilityCO data...
[08:39:25 AM]larch.Model: Provisioning Weight data...
[08:39:25 AM]larch.Model: Provisioning Avail data...
[08:39:25 AM]larch.Model: Provisioning Choice data...
[08:39:25 AM]larch.Model: Provisioning UtilityCO data...
[08:39:25 AM]larch.Model: Provisioning Weight data...
[08:39:26 AM]larch.Model: Provisioning Avail data...
[08:39:26 AM]larch.Model: Provisioning Choice data...
[08:39:26 AM]larch.Model: Provisioning UtilityCO data...
[08:39:26 AM]larch.Model: Provisioning Weight data...
[08:39:26 AM]larch.Model: Provisioning Avail data...
[08:39:26 AM]larch.Model: Provisioning Choice data...
[08:39:26 AM]larch.Model: Provisioning UtilityCO data...
[08:39:26 AM]larch.Model: Provisioning Weight data...
[08:39:26 AM]larch.Model: Provisioning Avail data...
[08:39:26 AM]larch.Model: Provisioning Choice data...
[08:39:26 AM]larch.Model: Provisioning UtilityCO data...
[08:39:26 AM]larch.Model: Provisioning Weight data...
[08:39:27 AM]larch.Model: Provisioning Avail data...
[08:39:27 AM]larch.Model: Provisioning Choice data...
[08:39:27 AM]larch.Model: Provisioning UtilityCO data...
[08:39:27 AM]larch.Model: Provisioning Weight data...
[08:39:27 AM]larch.Model: Provisioning Avail data...
[08:39:27 AM]larch.Model: Provisioning Choice data...
[08:39:27 AM]larch.Model: Provisioning UtilityCO data...
[08:39:27 AM]larch.Model: Provisioning Weight data...
[08:39:27 AM]larch.Model: Provisioning Avail data...
[08:39:27 AM]larch.Model: Provisioning Choice data...
[08:39:27 AM]larch.Model: Provisioning UtilityCO data...
[08:39:27 AM]larch.Model: Provisioning Weight data...
[08:39:27 AM]larch.Model: Provisioning Avail data...
[08:39:27 AM]larch.Model: Provisioning Choice data...
[08:39:27 AM]larch.Model: Provisioning UtilityCO data...
[08:39:27 AM]larch.Model: Provisioning Weight data...
[08:39:28 AM]larch.Model: Provisioning Avail data...
[08:39:28 AM]larch.Model: Provisioning Choice data...
[08:39:28 AM]larch.Model: Provisioning UtilityCO data...
[08:39:28 AM]larch.Model: Provisioning Weight data...
[08:39:28 AM]larch.Model: Provisioning Avail data...
[08:39:28 AM]larch.Model: Provisioning Choice data...
[08:39:28 AM]larch.Model: Provisioning UtilityCO data...
[08:39:28 AM]larch.Model: Provisioning Weight data...
[08:39:28 AM]larch.Model: Provisioning Avail data...
[08:39:28 AM]larch.Model: Provisioning Choice data...
[08:39:28 AM]larch.Model: Provisioning UtilityCO data...
[08:39:28 AM]larch.Model: Provisioning Weight data...
[08:39:28 AM]larch.Model: Provisioning Avail data...
[08:39:28 AM]larch.Model: Provisioning Choice data...
[08:39:28 AM]larch.Model: Provisioning UtilityCO data...
[08:39:28 AM]larch.Model: Provisioning Weight data...

Destination Choice Data

In [17]:
d.set_alternatives( numpy.arange(1,nZones+1) )

# Due to a limitation of HDF5, we are not able to
# give a choice_idco dictionary for models with lots of alternatives,
# as there is a hard limit on the quantity of data in the header for
# each array.  This limitation will probably be circumvented in a
# future version of Larch, but for now we must manually create and
# store an entire idca array.
ch = numpy.zeros([d.nAllCases(), nZones ], dtype=numpy.float32)
dtaz = d.idco.DTAZ[:]
ch[range(dtaz.shape[0]), dtaz-1] = 1
d.new_idca_from_array('_choice_', ch, overwrite=True)

# We're also going to overwrite the _avail_ array, so that
# all destinations are available for all tours.
d.new_idca_from_array('_avail_', numpy.ones([d.nAllCases(), nZones ], dtype=bool), overwrite=True)
In [18]:
# First we purge all of the OMX columns we plucked in the past
for i in omx.data:
    d.delete_data(i.name)
for i in omx.lookup:
    d.delete_data(i.name)


# Then we can attach the OMX as an external data source, linking in
# entire rows from the skim matrices, plus the lookups, which are OTAZ-generic
d.idca.add_external_omx(omx, rowindexnode=d.idco.HOMETAZi, n_alts=nZones)

d.info(1)
Out[18]:

<larch.DT> tmpgnonj_w5.h5f

Variable            dtype    Source File                                                        Shape       Original Source
idco:
  AGE               int64    /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmpgnonj_w5.h5f   (6123,)     /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville_person.h5
  BIKETIME          float64  /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmpgnonj_w5.h5f   (6123,)
  CARCOST           float64  /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmpgnonj_w5.h5f   (6123,)
  DTAZ              int64    /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmpgnonj_w5.h5f   (6123,)     /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville_tours.h5
  DTAZi             int64    /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmpgnonj_w5.h5f   (6123,)
  HHID              int64    /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmpgnonj_w5.h5f   (6123,)     /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville_tours.h5
  HHSIZE            int64    /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmpgnonj_w5.h5f   (6123,)     /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville_hh.h5
  HOMETAZ           int64    /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmpgnonj_w5.h5f   (6123,)     /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville_hh.h5
  HOMETAZi          int64    /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmpgnonj_w5.h5f   (6123,)
  INCOME            int64    /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmpgnonj_w5.h5f   (6123,)     /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville_hh.h5
  N_OTHERTOURS      int64    /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmpgnonj_w5.h5f   (6123,)     /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville_person.h5
  N_TOTALTOURS      int64    /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmpgnonj_w5.h5f   (6123,)     /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville_person.h5
  N_WORKTOURS       int64    /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmpgnonj_w5.h5f   (6123,)     /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville_person.h5
  PERSONID          int64    /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmpgnonj_w5.h5f   (6123,)     /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville_tours.h5
  TOURMODE          int64    /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmpgnonj_w5.h5f   (6123,)     /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville_tours.h5
  TOURPURP          int64    /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmpgnonj_w5.h5f   (6123,)     /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville_tours.h5
  WALKTIME          float64  /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmpgnonj_w5.h5f   (6123,)
  WORKS             int64    /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmpgnonj_w5.h5f   (6123,)     /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville_person.h5
  caseid            int64    /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmpgnonj_w5.h5f   (6123,)     /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville_tours.h5
idca:
  AUTO_TIME         float64  /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville.omx   (6123, 15)
  DIST              float64  /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville.omx   (6123, 15)
  EMPLOYMENT        float64  /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville.omx   (6123, 15)
  EMP_NONRETAIL     float64  /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville.omx   (6123, 15)
  EMP_RETAIL        float64  /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville.omx   (6123, 15)
  LAT               int64    /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville.omx   (6123, 15)
  LON               int64    /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville.omx   (6123, 15)
  MODECHOICELOGSUM  float32  /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmpgnonj_w5.h5f              (6123, 15)
  RAIL_FARE         float64  /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville.omx   (6123, 15)
  RAIL_TIME         float64  /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville.omx   (6123, 15)
  TAZID             int64    /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmp7ya6g3ip/exampville.omx   (6123, 15)
  _avail_           bool     /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmpgnonj_w5.h5f              (6123, 15)
  _choice_          float32  /var/folders/c3/sgn_gh2j3kv37txqvspbnyzw0000gn/T/tmpgnonj_w5.h5f              (6123, 15)

Destination Choice Model Estimation

In [19]:
### DEST CHOICE MODEL

m = larch.Model(d)

m.title = "Exampville Work Tour Destination Choice"

from larch.util.piecewise import log_and_linear_function
dist_func = log_and_linear_function('DIST', baseparam='Distance')

m.utility.ca = (
    + P.ModeChoiceLogSum * X.MODECHOICELOGSUM
    + dist_func
#    + P.Distance * X.DIST
#    + P.Log1pDist * X("log1p(DIST)")
)

m.quantity = (
    + P("EmpRetail_HighInc") * X('EMP_RETAIL * (INCOME>50000)')
    + P("EmpNonRetail_HighInc") * X('EMP_NONRETAIL') * X("INCOME>50000")
    + P("EmpRetail_LowInc") * X('EMP_RETAIL') * X("INCOME<=50000")
    + P("EmpNonRetail_LowInc") * X('EMP_NONRETAIL') * X("INCOME<=50000")

)

m.quantity_scale = P.Theta
m.parameter.Theta(value=0.5, min=0.001, max=1.0, null_value=1.0)

m.parameter.EmpRetail_HighInc(holdfast=1, value=0)
m.parameter.EmpRetail_LowInc(holdfast=1, value=0)

# m.computed_factor_figure_with_derivative(
#             dist_func.evaluator1d('U'),
#             max_x=20,
#             header="Utility by Distance",
#             xaxis_label="Freeflow Highway Distance (miles)",
#             yaxis_label="Utility",
#             short_header="U by Distance",
#         )

m.maximize_loglike()
m.xhtml_report(filename=os.path.join(directory,'destination_choice_model.html'), cats='**')
[08:39:29 AM]larch.Model: Provisioning Avail data...
[08:39:29 AM]larch.Model: Provisioning Choice data...
[08:39:29 AM]larch.Model: Provisioning QuantityCA data...
[08:39:29 AM]larch.Model: Provisioning Weight data...
[08:39:29 AM]larch.Model: Provisioning UtilityCA data...
[08:39:29 AM]larch.Model: Model.setUp...
[08:39:29 AM]larch.Model: Model.setUp complete
[08:39:29 AM]larch.Model: Using <larch.util.optimize.OptimizeTechnique SLSQP>
[08:39:29 AM]larch.Model: Optimization terminated successfully. [SLSQP]
[08:39:29 AM]larch.Model: Ending leg (success) using SLSQP at [  1.05287419e+00  -9.35971236e-04  -5.20771849e-02   8.11518368e-01   0.00000000e+00   0.00000000e+00   4.64005976e-01  -8.17847501e-01]
[08:39:30 AM]larch.Model: Preliminary Results
+--------------------+---------------+----------+------+----------+
 Parameter            Estimated Value Std Error  t-Stat Null Value
+--------------------+---------------+----------+------+----------+
 ModeChoiceLogSum      1.053           nan        nan    0
 Distance             -0.000936        nan        nan    0
 logDistanceP1        -0.05208         nan        nan    0
 Theta                 0.8115          nan        nan    1
 EmpRetail_HighInc     0              fixed value        0
 EmpRetail_LowInc      0              fixed value        0
 EmpNonRetail_HighInc  0.464           nan        nan    0
 EmpNonRetail_LowInc  -0.8178          nan        nan    0
+--------------------+---------------+----------+------+----------+
[08:39:30 AM]larch.Model: Final Results
+--------------------+---------------+----------+------+----------+
 Parameter            Estimated Value Std Error  t-Stat Null Value
+--------------------+---------------+----------+------+----------+
 ModeChoiceLogSum      1.053           0.08195    12.85  0
 Distance             -0.000936        0.02497   -0.04   0
 logDistanceP1        -0.05208         0.1165    -0.45   0
 Theta                 0.8115          0.04343   -4.34   1
 EmpRetail_HighInc     0              fixed value        0
 EmpRetail_LowInc      0              fixed value        0
 EmpNonRetail_HighInc  0.464           0.2182     2.13   0
 EmpNonRetail_LowInc  -0.8178          0.3048    -2.68   0
+--------------------+---------------+----------+------+----------+
<matplotlib.figure.Figure at 0x11dcb0048>
In [21]:
m.svg_computed_factor_figure_with_derivative(dist_func)
y_funcs= [('U', <function LinearFunction.evaluator1d.<locals>.<lambda> at 0x11db30d08>)]
type y_funcs= <class 'list'>
Out[21]:
<matplotlib.figure.Figure at 0x11f868860>
In [ ]:
m.jupyter('params','ll','latest','UTILITYSPEC','PROBABILITYSPEC','DATA', 'excludedcases', 'NOTES')