Free fermion six vertex model: symmetric functions and random domino tilings

Published in: Selecta Mathematica

Abstract

Our work deals with symmetric rational functions and probabilistic models based on the fully inhomogeneous six vertex (ice type) model satisfying the free fermion condition. Two families of symmetric rational functions \(F_\lambda ,G_\lambda \) are defined as certain partition functions of the six vertex model, with variables corresponding to row rapidities, and the labeling signatures \(\lambda =(\lambda _1\ge \ldots \ge \lambda _N)\in {\mathbb {Z}}^N\) encoding boundary conditions. These symmetric functions generalize Schur symmetric polynomials, as well as some of their variations, such as factorial and supersymmetric Schur polynomials. Cauchy type summation identities for \(F_\lambda ,G_\lambda \) and their skew counterparts follow from the Yang–Baxter equation. Using the algebraic Bethe Ansatz, we obtain a double alternant type formula for \(F_\lambda \) and a Sergeev–Pragacz type formula for \(G_\lambda \). In the spirit of the theory of Schur processes, we define probability measures on sequences of signatures with probability weights proportional to products of our symmetric functions. We show that these measures can be viewed as determinantal point processes, and we express their correlation kernels in a double contour integral form. We present two proofs: the first is a direct computation of Eynard–Mehta type, and the second uses non-standard, inhomogeneous versions of fermionic operators in a Fock space coming from the algebraic Bethe Ansatz for the six vertex model. We also interpret our determinantal processes as random domino tilings of a half-strip with inhomogeneous domino weights. In the bulk, we show that the lattice asymptotic behavior of such domino tilings is described by a new determinantal point process on \({\mathbb {Z}}^{2}\), which can be viewed as a doubly inhomogeneous generalization of the extended discrete sine process.

Notes

  1. The term “bulk” refers to the parts of the system where the space can be rescaled to form growing regions with unit particle density.

  2. All square roots involved in identities in this proposition and throughout the section are always squared in the action of the operators, so we do not need to specify the branches.

References

  1. Aggarwal, A., Borodin, A., Wheeler, M.: Colored Fermionic Vertex Models and Symmetric Functions, arXiv preprint (2021). arXiv:2101.01605 [math.CO]

  2. Aggarwal, A.: Universality for Lozenge Tiling Local Statistics, arXiv preprint (2019). arXiv:1907.09991 [math.PR]

  3. Anderson, G.W., Guionnet, A., Zeitouni, O.: An Introduction to Random Matrices. Cambridge University Press, Cambridge (2010)


  4. Assiotis, T.: Determinantal structures in space inhomogeneous dynamics on interlacing arrays. Ann. Inst. H. Poincaré 21, 909–940 (2020). arXiv:1910.09500 [math.PR]


  5. Baxter, R.: Exactly Solved Models in Statistical Mechanics. Academic Press, London (1989)


  6. Betea, D., Bouttier, J.: The periodic Schur process and free fermions at finite temperature. Math. Phys. Anal. Geom. 22(1), 3 (2019). arXiv:1807.09022 [math-ph]


  7. Boutillier, C., Bouttier, J., Chapuy, G., Corteel, S., Ramassamy, S.: Dimers on rail yard graphs. Ann. Inst. Henri Poincaré D 4(4), 479–539 (2017). arXiv:1504.05176 [math-ph]


  8. Brubaker, B., Bump, D., Friedberg, S.: Schur polynomials and the Yang–Baxter equation. Commun. Math. Phys. 308(2), 281 (2011)


  9. Borodin, A., Corwin, I.: Macdonald processes. Probab. Theory Relat. Fields 158, 225–400 (2014). arXiv:1111.4408 [math.PR]


  10. Bouttier, J., Chapuy, G., Corteel, S.: From Aztec diamonds to pyramids: steep tilings. Trans. AMS 369(8), 5921–5959 (2017). arXiv:1407.0665 [math.CO]


  11. Berggren, T.: Domino tilings of the Aztec diamond with doubly periodic weightings. Ann. Probab. 49(4), 1965–2011 (2021). arXiv:1911.01250 [math.PR]


  12. Borodin, A., Ferrari, P.: Anisotropic growth of random surfaces in 2+1 dimensions. Commun. Math. Phys. 325, 603–684 (2014). arXiv:0804.3035 [math-ph]


  13. Bump, D., McNamara, P., Nakasuji, M.: Factorial Schur functions and the Yang–Baxter equation. Rikkyo-daigaku-sugaku-zasshi 63(1–2), 23–45 (2014). arXiv:1108.3087 [math.CO]


  14. Borodin, A., Olshanski, G.: Distributions on partitions, point processes, and the hypergeometric kernel. Commun. Math. Phys. 211(2), 335–358 (2000). arXiv:math/9904010 [math.RT]


  15. Borodin, A., Okounkov, A., Olshanski, G.: Asymptotics of Plancherel measures for symmetric groups. J. AMS 13(3), 481–515 (2000). arXiv:math/9905032 [math.CO]


  16. Borodin, A.: Periodic Schur process and cylindric partitions. Duke Math. J. 140(3), 391–468 (2007). arXiv:math/0601019 [math.CO]


  17. Borodin, A.: Determinantal Point Processes, Oxford Handbook of Random Matrix Theory (2011). arXiv:0911.1153 [math.PR]

  18. Borodin, A., Peche, S.: Airy kernel with two sets of parameters in directed percolation and random matrix theory. J. Stat. Phys. 132(2), 275–290 (2008). arXiv:0712.1086v3 [math-ph]


  19. Borodin, A., Petrov, L.: Higher spin six vertex model and symmetric rational functions. Selecta Math. 24(2), 751–874 (2018). arXiv:1601.05770 [math.PR]


  20. Borodin, A., Rains, E.M.: Eynard–Mehta theorem, Schur process, and their Pfaffian analogs. J. Stat. Phys. 121(3), 291–317 (2005). arXiv:math-ph/0409059


  21. Berele, A., Regev, A.: Hook Young diagrams with applications to combinatorics and representations of Lie superalgebras. Adv. Math. 64(2), 118–175 (1987)


  22. Borodin, A., Shlosman, S.: Gibbs ensembles of nonintersecting paths. Commun. Math. Phys. 293(1), 145–170 (2010). arXiv:0804.0564 [math-ph]


  23. Cohn, H., Elkies, N., Propp, J.: Local statistics for random domino tilings of the Aztec diamond. Duke Math. J. 85(1), 117–166 (1996). arXiv:math/0008243 [math.CO]


  24. Charlier, C.: Doubly periodic lozenge tilings of a hexagon and matrix valued orthogonal polynomials. Stud. Appl. Math. 146(1), 3–80 (2021). arXiv:2001.11095 [math-ph]


  25. Chhita, S., Johansson, K.: Domino statistics of the two-periodic Aztec diamond. Adv. Math. 294, 37–149 (2016). arXiv:1410.2385 [math.PR]


  26. Cohn, H., Kenyon, R., Propp, J.: A variational principle for domino tilings. J. AMS 14(2), 297–346 (2001). arXiv:math/0008220 [math.CO]


  27. Decreusefond, L., Flint, I., Privault, N., Torrisi, G.L.: Determinantal point processes. Stochastic Analysis for Poisson Point Processes, pp. 311–342 (2016)

  28. Duits, M., Kuijlaars, A.: The two periodic Aztec diamond and matrix valued orthogonal polynomials. J. Eur. Math. Soc. 23(4), 1075–1131 (2020). arXiv:1712.05636 [math.PR]


  29. Dyson, F.J.: A Brownian motion model for the eigenvalues of a random matrix. J. Math. Phys. 3(6), 1191–1198 (1962)


  30. Elkies, N., Kuperberg, G., Larsen, M., Propp, J.: Alternating-sign matrices and domino tilings. J. Alg. Combin. 1(2–3), 111–132 and 219–234 (1992)

  31. Eynard, B., Mehta, M.L.: Matrices coupled in a chain: I. Eigenvalue correlations. J. Phys. A 31, 4449–4456 (1998)


  32. Faddeev, L.D.: How algebraic Bethe ansatz works for integrable model. Les Houches lectures (1996). arXiv:hep-th/9605187

  33. Felderhof, B.U.: Diagonalization of the transfer matrix of the free-fermion model. II. Physica 66(2), 279–297 (1973)


  34. Felderhof, B.U.: Diagonalization of the transfer matrix of the free-fermion model. III. Physica 66(3), 509–526 (1973)


  35. Felderhof, B.U.: Direct diagonalization of the transfer matrix of the zero-field free-fermion model. Physica 65(3), 421–451 (1973)


  36. Fomin, S., Kirillov, A.N.: Grothendieck polynomials and the Yang–Baxter equation. Proceedings of Formal Power Series and Algebraic Combinatorics, pp. 183–190 (1994)

  37. Fomin, S., Kirillov, A.N.: The Yang–Baxter equation, symmetric functions, and Schubert polynomials. Discrete Math. 153(1–3), 123–143 (1996)


  38. Fehér, L., Némethi, A., Rimányi, R.: Equivariant classes of matrix matroid varieties. Comment. Math. Helv. 87(4), 861–889 (2012). arXiv:0812.4871 [math.AG]


  39. Forrester, P.J.: Meet Andréief, Bordeaux 1886, and Andreev, Kharkov 1882–1883. Random Matrices Theory Appl. 8(02), 1930001 (2019). arXiv:1806.10411 [math-ph]


  40. Ferrari, P.L., Spohn, H.: Domino tilings and the six-vertex model at its free-fermion point. J. Phys. A 39(33), 10297 (2006). arXiv:cond-mat/0605406 [cond-mat.stat-mech]


  41. Felder, G., Varchenko, A.: Algebraic Bethe ansatz for the elliptic quantum group \(E_{\tau ,\eta }({\rm sl}_2)\). Nucl. Phys. B 480(1–2), 485–503 (1996). arXiv:q-alg/9605024


  42. Gaudin, M.: Une démonstration simplifiée du théorème de Wick en mécanique statistique. Nucl. Phys. 15, 89–91 (1960)


  43. Gorin, V.: Lectures on random lozenge tilings, Cambridge Studies in Advanced Mathematics. Cambridge University Press (2021). https://people.math.wisc.edu/~vadicgor/Random_tilings.pdf

  44. Gleizer, O., Postnikov, A.: Littlewood–Richardson coefficients via Yang–Baxter equation. Int. Math. Res. Not. 2000(14), 741–774 (2000)


  45. Gorin, V., Petrov, L.: Universality of local statistics for noncolliding random walks. Ann. Probab. 47(5), 2686–2753 (2019). arXiv:1608.03243 [math.PR]


  46. Guo, P., Sun, S.: Identities on factorial Grothendieck polynomials. Adv. Appl. Math. 111, 101933 (2019). arXiv:1812.04390 [math.CO]


  47. Gunna, A., Scrimshaw, T.: Integrable systems and crystals for edge labeled tableaux, arXiv preprint (2022). arXiv:2202.06004 [math.CO]

  48. Hardt, A.: Lattice Models, Hamiltonian Operators, and Symmetric Functions, arXiv preprint (2021). arXiv:2109.14597 [math.RT]

  49. Hamel, A.M., Goulden, I.P.: Lattice paths and a Sergeev–Pragacz formula for skew supersymmetric functions. Can. J. Math. 47(2), 364–382 (1995)


  50. Hough, J.B., Krishnapur, M., Peres, Y., Virág, B.: Determinantal processes and independence. Probab. Surv. 3, 206–229 (2006). arXiv:math/0503110 [math.PR]


  51. Ikeda, T., Naruse, H.: Excited Young diagrams and equivariant Schubert calculus. Trans. AMS 361(10), 5193–5221 (2009). arXiv:math/0703637 [math.AG]


  52. Johansson, K.: Non-intersecting paths, random tilings and random matrices. Probab. Theory Relat. Fields 123(2), 225–280 (2002). arXiv:math/0011250 [math.PR]


  53. Johansson, K.: The arctic circle boundary and the Airy process. Ann. Probab. 33(1), 1–30 (2005). arXiv:math/0306216 [math.PR]


  54. Johansson, K.: Random matrices and determinantal processes (2005). arXiv:math-ph/0510038

  55. Kac, V.G.: Infinite-Dimensional Lie Algebras, 3rd edn. Cambridge University Press, Cambridge (1990)

  56. Korepin, V., Bogoliubov, N., Izergin, A.: Quantum Inverse Scattering Method and Correlation Functions. Cambridge University Press, Cambridge (1993)


  57. Kenyon, R.: Dominos and the Gaussian free field. Ann. Probab. 29(3), 1128–1137 (2001). arXiv:math-ph/0002027


  58. Kenyon, R.: Lectures on dimers (2009). arXiv:0910.3129 [math.PR]

  59. Kitanine, N., Maillet, J.M., Slavnov, N.A., Terras, V.: Spin–spin correlation functions of the XXZ-1/2 Heisenberg chain in a magnetic field. Nucl. Phys. B 641(3), 487–518 (2002). arXiv:hep-th/0201045


  60. Kenyon, R., Okounkov, A.: Limit shapes and the complex Burgers equation. Acta Math. 199(2), 263–302 (2007). arXiv:math-ph/0507007


  61. König, W.: Orthogonal polynomial ensembles in probability theory. Probab. Surv. 2, 385–447 (2005). arXiv:math/0403090 [math.PR]


  62. König, W., O’Connell, N., Roch, S.: Non-colliding random walks, tandem queues, and discrete orthogonal polynomial ensembles. Electron. J. Probab. 7(5), 1–24 (2002)


  63. Korff, C.: Cylindric versions of specialised Macdonald functions and a deformed Verlinde algebra. Commun. Math. Phys. 318(1), 173–246 (2013). arXiv:1110.6356 [math-ph]


  64. Korff, C.: Cylindric Hecke characters and Gromov–Witten invariants via the asymmetric six-vertex model. Commun. Math. Phys. 381(2), 591–640 (2021). arXiv:1906.02565 [math-ph]


  65. Kenyon, R., Okounkov, A., Sheffield, S.: Dimers and amoebae. Ann. Math. 163, 1019–1056 (2006). arXiv:math-ph/0311005


  66. Kirillov, A.N., Reshetikhin, N.Y.: The Bethe ansatz and the combinatorics of Young tableaux. J. Sov. Math. 41(2), 925–955 (1988)


  67. Kulesza, A., Taskar, B.: Determinantal point processes for machine learning. Found. Trends Mach. Learn. 5(2–3), 123–286 (2012). arXiv:1207.6083 [stat.ML]


  68. Lascoux, A.: The 6 vertex model and Schubert polynomials. SIGMA 3, 029 (2007). arXiv:math/0610719 [math.CO]


  69. Lascoux, A., Leclerc, B., Thibon, J.-Y.: Flag varieties and the Yang–Baxter equation. Lett. Math. Phys. 40(1), 75–90 (1997)


  70. Lyons, R.: Determinantal probability measures. Publ. IHES 98, 167–212 (2003). arXiv:math/0204325 [math.PR]


  71. Macchi, O.: The coincidence approach to stochastic point processes. Adv. Appl. Probab. 7(1), 83–122 (1975)


  72. Macdonald, I.G.: Schur functions: theme and variations. Sém. Lothar. Combin. 28, 5–39 (1992)


  73. Macdonald, I.G.: Symmetric Functions and Hall Polynomials, 2nd edn. Oxford University Press, Oxford (1995)


  74. McNamara, P.: Factorial Schur functions via the six vertex model, arXiv preprint (2009). arXiv:0910.5288 [math.CO]

  75. Mehta, M.L., Gaudin, M.: On the density of eigenvalues of a random matrix. Nucl. Phys. 18, 420–427 (1960)


  76. Mkrtchyan, S.: Plane partitions with 2-periodic weights. Lett. Math. Phys. 104(9), 1053–1078 (2014). arXiv:1309.4825 [math.PR]



  78. Molev, A.: Comultiplication rules for the double Schur functions and Cauchy identities. Electron. J. Comb. R13 (2009). arXiv:0807.2127 [math.CO]

  79. Motegi, K.: Izergin–Korepin analysis on the projected wavefunctions of the generalized free-fermion model. Adv. Math. Phys. 2017, 7563781 (2017). arXiv:1704.03575 [math-ph]


  80. Motegi, K.: Integrability approach to Fehér-Némethi-Rimányi-Guo-Sun type identities for factorial Grothendieck polynomials. Nucl. Phys. B 954, 114998 (2020). arXiv:1909.02278 [math.CO]


  81. Morales, A.H., Pak, I., Panova, G.: Hook formulas for skew shapes II. Combinatorial proofs and enumerative applications. SIAM J. Discrete Math. 31(3), 1953–1989 (2017). arXiv:1610.04744 [math.CO]


  82. Morales, A., Pak, I., Panova, G.: Hook formulas for skew shapes III. Multivariate and product formulas. Alg. Combin. 2(5), 815–861 (2019). arXiv:1707.00931 [math.CO]


  83. Moens, E.M., Van der Jeugt, J.: A determinantal formula for supersymmetric Schur polynomials. J. Alg. Combin. 17(3), 283–307 (2003)


  84. Nagao, T., Forrester, P.J.: Multilevel dynamical correlation functions for Dyson’s Brownian motion model of random matrices. Phys. Lett. A 247(1–2), 42–46 (1998)


  85. Nakagawa, J., Noumi, M., Shirakawa, M., Yamada, Y.: Tableau representation for Macdonald’s ninth variation of Schur functions. Phys. Combin. (2001). https://doi.org/10.1142/9789812810007_0008

  86. Okounkov, A.: Infinite wedge and random partitions. Selecta Math. 7(1), 57–81 (2001). arXiv:math/9907127 [math.RT]


  87. Okounkov, A.: Symmetric functions and random partitions, Symmetric functions 2001: Surveys of developments and perspectives (2002). arXiv:math/0309074 [math.CO]

  88. Olshanski, G.: Interpolation Macdonald polynomials and Cauchy-type identities. J. Combin. Theory A 162, 65–117 (2019). arXiv:1712.08018 [math.CO]


  89. Oota, T.: Quantum projectors and local operators in lattice integrable models. J. Phys. A 37(2), 441 (2003). arXiv:hep-th/0304205


  90. Okounkov, A., Reshetikhin, N.: Correlation function of Schur process with application to local geometry of a random 3-dimensional Young diagram. J. AMS 16(3), 581–603 (2003). arXiv:math/0107056 [math.CO]


  91. Pauling, L.: The structure and entropy of ice and of other crystals with some randomness of atomic arrangement. J. Am. Chem. Soc. 57(12), 2680–2684 (1935)


  92. Petrov, L.: Asymptotics of uniformly random lozenge tilings of polygons. Gaussian free field. Ann. Probab. 43(1), 1–43 (2015). arXiv:1206.5123 [math.PR]

  93. Pak, I., Petrov, F.: Hidden symmetries of weighted lozenge tilings. Electron. J. Combin. 27(3), 3–44 (2020). arXiv:2003.14236 [math.CO]


  94. Reshetikhin, N.: Lectures on the integrability of the 6-vertex model, Exact Methods in Low-dimensional Statistical Physics and Quantum Computing, pp. 197–266 (2010). arXiv:1010.5031 [math-ph]

  95. Sheffield, S.: Random surfaces, Astérisque 304 (2005). arXiv:math/0304049 [math.PR]

  96. Soshnikov, A.: Determinantal random point fields. Russ. Math. Surv. 55(5), 923–975 (2000). arXiv:math/0002099 [math.PR]


  97. Tsilevich, N.: Quantum inverse scattering method for the q-boson model and symmetric functions. Funct. Anal. Appl. 40(3), 207–217 (2006). arXiv:math-ph/0510073


  98. Wheeler, M., Zinn-Justin, P.: Hall polynomials, inverse Kostka polynomials and puzzles. J. Combin. Theory A 159, 107–163 (2018). arXiv:1603.01815 [math-ph]


  99. Yau, H.-T.: The Wigner–Dyson–Gaudin-Mehta Conjecture. Notices of the international congress of Chinese mathematicians, pp. 10–13 (2013)

  100. Zinn-Justin, P.: Six-vertex model with domain wall boundary conditions and one-matrix model. Phys. Rev. E 62(3), 3411 (2000). arXiv:math-ph/0005008


  101. Zinn-Justin, P.: Littlewood–Richardson coefficients and integrable tilings. Electron. J. Combin. 16(R12), 1 (2009). arXiv:0809.2392 [math-ph]


  102. Zinn-Justin, P.: Six-vertex, loop and tiling models: integrability and combinatorics (2009). arXiv:0901.0665 [math-ph]

Acknowledgements

Amol Aggarwal was partially supported by a Clay Research Fellowship. Alexei Borodin was partially supported by the NSF grants DMS-1664619, DMS-1853981, and the Simons Investigator program. Leonid Petrov was partially supported by the NSF grant DMS-1664617, and the Simons Collaboration Grant for Mathematicians 709055. Michael Wheeler was partially supported by an Australian Research Council Future Fellowship, grant FT200100981. This material is based upon work supported by the National Science Foundation under Grant No. DMS-1928930 while Aggarwal and Petrov participated in a program hosted by the Mathematical Sciences Research Institute in Berkeley, California, during the Fall 2021 semester. We are very grateful to the anonymous referees for numerous helpful remarks.

Author information

Corresponding author

Correspondence to Leonid Petrov.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Part IV Appendix

Formulas for \(F_\lambda \) and \(G_\lambda \)

Here we employ the row operators (defined in Sect. 2.3) to derive explicit formulas for the partition functions \(F_\lambda \) and \(G_\lambda \) of the free fermion six vertex model, and thus prove Theorems 3.9 and 3.10. This Appendix accompanies Sect. 3 and relies on algebraic Bethe Ansatz type computations. These computations follow [19, Section 4.5] (but are more involved in the case of \(G_\lambda \)); see also Part VII, and in particular Appendix VII.2, of [56].

1.1 Proof of Theorem 3.9

1.1.1 Recalling the notation

Throughout this subsection we fix a signature \(\lambda =(\lambda _1\ge \ldots \ge \lambda _N \ge 0)\) with N parts, and sequences

$$\begin{aligned} {\textbf{x}}=(x_1,\ldots ,x_N ),\qquad {\textbf{y}}=(y_1,y_2,\ldots ),\qquad {\textbf{r}}=(r_1,\ldots ,r_N ),\qquad {\textbf{s}}=(s_1,s_2,\ldots ). \end{aligned}$$

Recall (Definition 3.3) that the function \(F_\lambda ({\textbf{x}};{\textbf{y}};{\textbf{r}};{\textbf{s}})\) is the partition function of the free fermion six vertex model with weights \({\widehat{W}}\) (2.4) and with boundary conditions determined by \(\lambda \).

In this subsection we prove Theorem 3.9 stating that \(F_\lambda \) is given by the determinantal expression (3.12) involving the functions \(\varphi _k(x)\) (3.11). For convenience, let us explicitly reproduce the desired formula here:

$$\begin{aligned} \begin{aligned} F_\lambda ({\textbf{x}};{\textbf{y}};{\textbf{r}};{\textbf{s}})&= \Biggl ( \prod _{i=1}^{N}x_i(r^{-2}_i-1) \prod _{1\le i<j\le N}\frac{r_i^{-2}x_i-x_j}{x_i-x_j} \Biggr )\\&\quad \times \det \biggl [ \frac{1}{y_{\lambda _j+N-j+1}-x_i} \prod _{m=1}^{\lambda _j+N-j} \frac{y_m-s_m^2x_i}{s_m^2(y_m-x_i)} \biggr ]_{i,j=1}^{N}. \end{aligned} \end{aligned}$$
(A.1)
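
For example, when \(N=1\) and \(\lambda =(\lambda _1)\), the product over \(1\le i<j\le N\) is empty, the determinant is of size \(1\times 1\), and (A.1) reads

$$\begin{aligned} F_{(\lambda _1)}(x_1;{\textbf{y}};r_1;{\textbf{s}}) = \frac{x_1(r_1^{-2}-1)}{y_{\lambda _1+1}-x_1} \prod _{m=1}^{\lambda _1} \frac{y_m-s_m^2x_1}{s_m^2(y_m-x_1)}. \end{aligned}$$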

For the proof we will need the row operators \({\widehat{A}},{\widehat{B}},{\widehat{C}},{\widehat{D}}\) defined by (2.22)–(2.23). These operators are built from the weights \({\widehat{W}}\), depend on two numbers \(x,r\) and the sequences \({\textbf{y}}, {\textbf{s}}\), and act (from the right) on tensor products of two-dimensional spaces \(V^{(k)}=\mathop {\textrm{span}}\{ e_0^{(k)},e_1^{(k)} \} \simeq {\mathbb {C}}^2\). To the signature \(\lambda \) we associate the element \(e_{{\mathcal {S}}(\lambda )}\) in the (formal) infinite tensor product \(V^{(1)}\otimes V^{(2)}\otimes \ldots \), where we take \(e^{(k)}_1\) in the k-th place if and only if \(k\in {\mathcal {S}}(\lambda )\) and \(e_0^{(k)}\) otherwise, see Sect. 3.1. For example, the empty signature \(\varnothing \) (which has 0 parts) corresponds to \(e_{\varnothing }=e_0^{(1)}\otimes e_0^{(2)}\otimes \ldots \).
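
For instance, if \(N=3\) and \(\lambda =(2,1,0)\), then \({\mathcal {S}}(\lambda )=\{\lambda _3+1,\lambda _2+2,\lambda _1+3 \}=\{1,3,5 \}\) and \(e_{{\mathcal {S}}(\lambda )}=e_1^{(1)}\otimes e_0^{(2)}\otimes e_1^{(3)}\otimes e_0^{(4)}\otimes e_1^{(5)}\otimes e_0^{(6)}\otimes e_0^{(7)}\otimes \ldots \).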

By Proposition 3.4, \(F_\lambda ({\textbf{x}};{\textbf{y}};{\textbf{r}};{\textbf{s}})\) is the coefficient of \(e_{{\mathcal {S}}(\lambda )}\) in \(e_{\varnothing }{\widehat{B}}(x_N,r_N)\ldots {\widehat{B}}(x_1,r_1) \), and for the proof of Theorem 3.9 we proceed to evaluate this coefficient. One of our main tools is the Yang–Baxter equation stated as a family of commutation relations between the operators (see Proposition 2.5).

1.1.2 Action on a tensor product of two spaces

The crucial part of the argument is to consider the action of \({\widehat{B}}(x_N,r_N)\ldots {\widehat{B}}(x_1,r_1)\) on a tensor product of two spaces, \(V_1\otimes V_2\). Using the second identity from (2.23), namely, \((v_1 \otimes v_2) {\widehat{B}} = v_1 {\widehat{D}} \otimes v_2 {\widehat{B}} + v_1 {\widehat{B}} \otimes v_2 {\widehat{A}}\), we see that

$$\begin{aligned} {\widehat{B}}(x_N,r_N)\ldots {\widehat{B}}(x_1,r_1) = \sum _{{\mathcal {I}}\subseteq \left\{ 1,\ldots ,N \right\} } X_{{\mathcal {I}}}({\textbf{x}};{\textbf{r}})\otimes Y_{{\mathcal {I}}}({\textbf{x}};{\textbf{r}}), \end{aligned}$$
(A.2)

where

$$\begin{aligned} \begin{aligned} X_{{\mathcal {I}}}({\textbf{x}};{\textbf{r}})&=X_N({\mathcal {I}};x_N,r_N)X_{N-1}({\mathcal {I}};x_{N-1},r_{N-1})\ldots X_1({\mathcal {I}};x_1,r_1),\\ Y_{{\mathcal {I}}}({\textbf{x}};{\textbf{r}})&=Y_N({\mathcal {I}};x_N,r_N)Y_{N-1}({\mathcal {I}};x_{N-1},r_{N-1})\ldots Y_1({\mathcal {I}};x_1,r_1),\\ X_i({\mathcal {I}};x_i,r_i)&={\left\{ \begin{array}{ll} {\widehat{D}}(x_i,r_i),&{} \quad i \in {\mathcal {I}};\\ {\widehat{B}}(x_i,r_i),&{} \quad i \notin {\mathcal {I}}, \end{array}\right. } \qquad \qquad Y_i({\mathcal {I}};x_i,r_i)={\left\{ \begin{array}{ll} {\widehat{B}}(x_i,r_i),&{} \quad i \in {\mathcal {I}};\\ {\widehat{A}}(x_i,r_i),&{} \quad i \notin {\mathcal {I}}. \end{array}\right. } \qquad \end{aligned} \end{aligned}$$
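
For instance, for \(N=2\), abbreviating \({\widehat{A}}_i={\widehat{A}}(x_i,r_i)\) and similarly for the other operators, the expansion (A.2) consists of four terms,

$$\begin{aligned} (v_1\otimes v_2)\,{\widehat{B}}_2{\widehat{B}}_1 = v_1{\widehat{D}}_2{\widehat{D}}_1\otimes v_2{\widehat{B}}_2{\widehat{B}}_1 + v_1{\widehat{D}}_2{\widehat{B}}_1\otimes v_2{\widehat{B}}_2{\widehat{A}}_1 + v_1{\widehat{B}}_2{\widehat{D}}_1\otimes v_2{\widehat{A}}_2{\widehat{B}}_1 + v_1{\widehat{B}}_2{\widehat{B}}_1\otimes v_2{\widehat{A}}_2{\widehat{A}}_1, \end{aligned}$$

corresponding to \({\mathcal {I}}=\left\{ 1,2 \right\} ,\left\{ 2 \right\} ,\left\{ 1 \right\} ,\varnothing \), respectively.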

Now, using the commutation relations (2.26)–(2.27) from Proposition 2.5, we move all the operators \({{\widehat{B}}}\) to the right in both \(X_{\mathcal {I}}\) and \(Y_{\mathcal {I}}\), which allows us to rewrite (A.2) as

$$\begin{aligned} \begin{aligned}&\sum _{\begin{array}{c} I \cup J = \{1,\ldots , N \} \\ I' \cup J' = \{1,\ldots , N\} \end{array}} c_{I; I'} ({\textbf{x}}; {\textbf{r}}) \, {\widehat{D}} (x_{j_{N - k}}, r_{j_{N - k}}) \ldots {\widehat{D}} (x_{j_1}, r_{j_1}) {\widehat{B}} (x_{i_k}, r_{i_k}) \ldots {\widehat{B}} (x_{i_1}, r_{i_1})\\&\quad \otimes {\widehat{A}} (x_{j_{N - m}'}, r_{j_{N - m}'}) \ldots {\widehat{A}} (x_{j_1'}, r_{j_1'}) {\widehat{B}} (x_{i_m'}, r_{i_m'}) \ldots {\widehat{B}} (x_{i_1'}, r_{i_1'}), \end{aligned} \end{aligned}$$
(A.3)

for some rational functions \(c_{I; I'} ({\textbf{x}}; {\textbf{r}})\), where we have denoted \(|I| = k\) and \(|I'| = m\), defined \(J = \left\{ 1,\ldots ,N \right\} {\setminus } I\) and \(J' = \left\{ 1,\ldots ,N \right\} {\setminus } I'\), and ordered the indices such that \(i_\alpha<i_\beta ,i_\alpha '<i_\beta ',j_\alpha <j_\beta \), and \(j_\alpha '<j_\beta '\) for all \(\alpha <\beta \). Here we also employed the commutativity of \({\widehat{A}}\) (2.24) and \({\widehat{D}}\) (2.28). In fact, here one can already see from (2.26)–(2.27) that \(m=N-k\), but we will get this relation (and a stronger relation between the sets \(I,I',J,J'\)) in the next Lemma A.2.

Remark A.1

Let us make an important observation about the coefficients \(c_{I;I'}({\textbf{x}};{\textbf{r}})\). Namely, these coefficients are computed using only the commutation relations for the operators \({\widehat{A}},{\widehat{B}},{\widehat{C}},{\widehat{D}}\), and we argue that the \(c_{I;I'}({\textbf{x}};{\textbf{r}})\)’s do not depend on the order of applying the commutation relations. This property is based on the fact that for generic parameters \((x,r)\), there exists a representation of \(\begin{bmatrix} {\widehat{A}}(x,r)&{}{\widehat{B}}(x,r)\\ {\widehat{C}}(x,r)&{}{\widehat{D}}(x,r) \end{bmatrix}\) subject to the same commutation relations, and a highest weight vector (annihilated by \({\widehat{C}}\) and an eigenvector of \({\widehat{A}}\) and \({\widehat{D}}\)) \({\textsf{v}}_0\) in that representation, such that the vectors \(\Big (\prod _{k\in {\mathcal {K}}}{\widehat{B}}(x_k,r_k)\Big ){\textsf{v}}_0\), with \({\mathcal {K}}\) ranging over all subsets of \(\{1,2,\ldots ,N\}\), are linearly independent. This fact is a corollary of [41, Lemma 14]: our operators are based on the free fermion six vertex weights, and the cited paper deals with the more general eight vertex case.

Therefore, if we applied the commutation relations in two different orders and obtained different coefficients \(c_{I;I'}({\textbf{x}};{\textbf{r}})\) in (A.3), then evaluating the two resulting expansions in the above highest weight representation would contradict the linear independence.

Lemma A.2

We have \(c_{I;I'}({\textbf{x}};{\textbf{r}})=0\) if \(I\cap I'\ne \varnothing \) or \(J\cap J'\ne \varnothing \).

Proof

The two claims with \(I\cap I'\ne \varnothing \) and \(J\cap J'\ne \varnothing \) are analogous, so we only prove the first one.

Suppose \(I\cap I'\ne \varnothing \). Since the operators \({{\widehat{B}}}(x,r)\) commute up to a scalar factor (see (2.25)), we may assume that \(I\cap I'\ni N\) by permuting terms in the left-hand side of (A.2).

Observe that no summand in (A.2) with \(X_N ({\mathcal {I}}; x_N, r_N) = {\widehat{D}} (x_N, r_N)\) (i.e., \(N\in {\mathcal {I}}\)) contributes to a nonzero value of \(c_{I; I'} ({\textbf{x}}, {\textbf{r}}) \). Indeed, in this case the operator \({\widehat{D}}(x_N,r_N)\) is the leftmost term in \(X_{{\mathcal {I}}}\), and thus it does not get involved in the commutation relations of the form (2.26), which means that one cannot obtain \({\widehat{B}}(x_N,r_N)\) from this term. Similarly, no summand in (A.2) with \(Y_N ({\mathcal {I}}; x_N, r_N) = {\widehat{A}} (x_N, r_N)\) (i.e., \(N\notin {\mathcal {I}}\)) contributes to a nonzero value of \(c_{I; I'} ({\textbf{x}}; {\textbf{r}})\).

However, for any \({\mathcal {I}}\subseteq \left\{ 1,\ldots ,N \right\} \) we either have \(X_N ({\mathcal {I}}; x_N, r_N) = {\widehat{D}} (x_N, r_N)\) or \(Y_N({\mathcal {I}}; x_N, r_N) = {\widehat{A}} (x_N, r_N)\), and so we cannot obtain \({\widehat{B}}(x_N,r_N)\) in both tensor factors. Therefore, terms with \(I\cap I'\ne \varnothing \) are zero. \(\square \)

We see that in (A.3) it must be \(I=J'\) and \(I'=J\), and we may abbreviate \(c_{I;I'}=c_I\). We thus rewrite (A.2)–(A.3) as

$$\begin{aligned} \begin{aligned}&{\widehat{B}}(x_N,r_N)\ldots {\widehat{B}}(x_1,r_1)\\&\quad = \sum _{\begin{array}{c} I \cup J = \{1,\ldots , N \} \end{array}} c_{I} ({\textbf{x}}; {\textbf{r}}) \, {\widehat{D}} (x_{j_{N - k}}, r_{j_{N - k}}) \ldots {\widehat{D}} (x_{j_1}, r_{j_1}) {\widehat{B}} (x_{i_k}, r_{i_k}) \ldots {\widehat{B}} (x_{i_1}, r_{i_1})\\&\qquad \otimes {\widehat{A}} (x_{i_k},r_{i_k}) \ldots {\widehat{A}} (x_{i_1}, r_{i_1}) {\widehat{B}} (x_{j_{N-k}}, r_{j_{N-k}}) \ldots {\widehat{B}} (x_{j_1}, r_{j_1}). \end{aligned}\nonumber \\ \end{aligned}$$
(A.4)

We will now evaluate the coefficients \(c_I({\textbf{x}};{\textbf{r}})\). First, set \(I=\left\{ N-k+1,N-k+2,\ldots , N \right\} \). Then the operator

$$\begin{aligned} \begin{aligned}&{\widehat{D}}(x_{N-k},r_{N-k})\ldots {\widehat{D}}(x_1,r_1) {\widehat{B}}(x_{N},r_N)\ldots {\widehat{B}}(x_{N-k+1},r_{N-k+1})\\&\quad \otimes {\widehat{A}}(x_{N},r_N)\ldots {\widehat{A}}(x_{N-k+1},r_{N-k+1}) {\widehat{B}}(x_{N-k},r_{N-k})\ldots {\widehat{B}}(x_1,r_1) \end{aligned} \end{aligned}$$
(A.5)

might come from (A.2) only for \({\mathcal {I}}=\{1,2,\ldots ,N-k \}\), in which case

$$\begin{aligned}{} & {} X_{{\mathcal {I}}}({\textbf{x}};{\textbf{r}})\otimes Y_{{\mathcal {I}}}({\textbf{x}};{\textbf{r}})\\ {}{} & {} \quad = {\widehat{B}}(x_{N},r_N)\ldots {\widehat{B}}(x_{N-k+1},r_{N-k+1}) {\widehat{D}}(x_{N-k},r_{N-k})\ldots {\widehat{D}}(x_1,r_1)\\{} & {} \quad \otimes {\widehat{A}}(x_{N},r_N)\ldots {\widehat{A}}(x_{N-k+1},r_{N-k+1}) {\widehat{B}}(x_{N-k},r_{N-k})\ldots {\widehat{B}}(x_1,r_1). \end{aligned}$$

In the first tensor factor, we use the commutation relation (2.26) to place the \({\widehat{D}}\) operators on the left and extract the coefficient \(c_{I}({\textbf{x}};{\textbf{r}})\) of (A.5). We have thus established:

Lemma A.3

For \(I=I_k:=\left\{ N-k+1,N-k+2,\ldots ,N \right\} \), the rational function \(c_I\) is equal to

$$\begin{aligned} c_{I_k} ({\textbf{x}};{\textbf{r}})= \prod _{i=1}^{N-k} \prod _{j=N-k+1}^{N} \frac{r_i^{-2}x_i-x_j}{x_i-x_j}. \end{aligned}$$

We are now in a position to compute \(c_I({\textbf{x}};{\textbf{r}})\) for arbitrary \(I\subset \left\{ 1,\ldots ,N \right\} \) of size k (where k is also arbitrary) by permuting the \({\widehat{B}}\) operators in the left-hand side of (A.2) thanks to the commutation relation (2.25). For each such I, let \(\sigma \) be a permutation of \(\left\{ 1,\ldots ,N \right\} \) which is increasing on the intervals \(\left\{ 1,\ldots ,N-k \right\} \) and \(\left\{ N-k+1,\ldots ,N \right\} \), and sends \(\left\{ N-k+1,N-k+2,\ldots ,N \right\} \) to I.

Lemma A.4

With the above notation, we have

$$\begin{aligned} c_I({\textbf{x}};{\textbf{r}})= & {} \mathop {\textrm{sgn}}(\sigma ) \prod _{1\le i<j\le N}\frac{r_i^{-2}x_i-x_j}{x_i-x_j} \prod _{\begin{array}{c} i,j\in I\\ i<j \end{array}} \left( \frac{r_i^{-2}x_i-x_j}{x_i-x_j} \right) ^{-1}\nonumber \\{} & {} \prod _{\begin{array}{c} i,j\notin I\\ i<j \end{array}} \left( \frac{r_i^{-2}x_i-x_j}{x_i-x_j} \right) ^{-1}. \end{aligned}$$
(A.6)

Proof

The claim follows from the fact that

$$\begin{aligned} c_I({\textbf{x}};{\textbf{r}})= c_{I_k}(\sigma ({\textbf{x}});\sigma ({\textbf{r}})) \prod _{\begin{array}{c} j\le N-k,\, i\ge N-k+1\\ \sigma (i)<\sigma (j) \end{array}} \frac{r_{\sigma (i)}^{-2}x_{\sigma (i)}-x_{\sigma (j)}}{r_{\sigma (j)}^{-2}x_{\sigma (j)}-x_{\sigma (i)}}, \end{aligned}$$

which in turn holds thanks to (2.25) via induction on the length of the permutation \(\sigma \) (which is the minimal number of elementary transpositions required to represent \(\sigma \) as their product). Then we can further simplify:

$$\begin{aligned} \begin{aligned}&c_{I_k}(\sigma ({\textbf{x}});\sigma ({\textbf{r}})) \prod _{\begin{array}{c} j\le N-k,\, i\ge N-k+1\\ \sigma (i)<\sigma (j) \end{array}} \frac{r_{\sigma (i)}^{-2}x_{\sigma (i)}-x_{\sigma (j)}}{r_{\sigma (j)}^{-2}x_{\sigma (j)}-x_{\sigma (i)}} \\&\quad = \prod _{i\in I,\,j\notin I}\frac{x_i-r_j^{-2}x_j}{x_i-x_j} \prod _{i\in I,\,j\notin I,\, i<j}\frac{x_j-r_i^{-2}x_i}{x_i-r_j^{-2}x_j} \\&\quad = \prod _{i\in I, \, j\notin I}(x_i-x_j)^{-1} \prod _{i,j\in I, \, i<j}(x_j-r_i^{-2}x_i)^{-1}\\&\qquad \prod _{i,j\notin I, \, i<j}(x_j-r_i^{-2}x_i)^{-1} \prod _{1\le i<j\le N}(x_j-r_i^{-2}x_i), \end{aligned} \end{aligned}$$

which leads to the desired right-hand side of (A.6). The signature of the permutation \(\sigma \) arises by turning \(x_i-x_j\) into \(x_j-x_i\) for each pair \(i\notin I,\,j\in I\) with \(i>j\). \(\square \)
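
As a consistency check, for \(I=I_k\) one can take \(\sigma =\textrm{id}\), so that \(\mathop {\textrm{sgn}}(\sigma )=1\) and the two inverted products in (A.6) cancel precisely the factors of \(\prod _{1\le i<j\le N}\frac{r_i^{-2}x_i-x_j}{x_i-x_j}\) with both indices in \(I_k\) or both in its complement; what remains is

$$\begin{aligned} c_{I_k}({\textbf{x}};{\textbf{r}})= \prod _{i=1}^{N-k} \prod _{j=N-k+1}^{N} \frac{r_i^{-2}x_i-x_j}{x_i-x_j}, \end{aligned}$$

in agreement with Lemma A.3.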

1.1.3 Completing the proof

We are now in a position to prove the determinantal formula (A.1), which finalizes the proof of Theorem 3.9. The goal is to express the coefficient of \(e_{{\mathcal {S}}(\lambda )}\) in \(e_{\varnothing }{\widehat{B}}(x_N,r_N)\ldots {\widehat{B}}(x_1,r_1) \). We are going to repeatedly apply identity (A.4) (with the coefficients \(c_I({\textbf{x}};{\textbf{r}})\) given by (A.6)) to vectors of the form \(e_0^{(m)} \otimes v\), \(m=1,2,\ldots \).

Observe that \(e_0^{(m)}{\widehat{B}}(x,r){\widehat{B}}(x',r')=0\). Therefore, any nonzero summand in (A.4) must have \(|I| \le 1\). Moreover, a step at which \(|I| = 1\) turns the factor \(e_0^{(m)}\) into \(e_1^{(m)}\), so a nonzero contribution to the coefficient of \(e_{{\mathcal {S}}(\lambda )}\) may arise only if the corresponding site satisfies \(m\in {\mathcal {S}}(\lambda )=\{\lambda _N+1, \lambda _{N-1}+2, \ldots , \lambda _1+N \}\). Therefore, each step of the repeated application of (A.4) for which we choose \(|I| = 1\) selects one element of \({\mathcal {S}}(\lambda )\) together with an index \(i\in \left\{ 1,\ldots ,N \right\} \) (namely, \(I=\{i \}\)), and these indices must be distinct. We encode this information by a permutation \(\tau \in {\mathfrak {S}}_N\). Using the facts that

$$\begin{aligned} \begin{aligned} c_{ \varnothing }({\textbf{x}};{\textbf{r}})&= 1, \qquad \quad c_{ \{k \} }({\textbf{x}};{\textbf{r}})=(-1)^{N-k} \prod _{i=1}^{k-1} \frac{r_i^{-2}x_i-x_k}{x_i-x_k} \prod _{j=k+1}^{N}\frac{r_k^{-2}x_k-x_j}{x_k-x_j},\\ e_0^{(m)} {\widehat{A}}(x,r)&= e_0^{(m)},\qquad e_0^{(m)} {\widehat{B}}(x,r)=\frac{x(1-r^2)}{r^2(y_m-x)}\,e_1^{(m)},\\ e_0^{(m)} {\widehat{D}}(x,r)&=\frac{y_m-s_m^2x}{s_m^2(y_m-x)}\,e_0^{(m)}, \end{aligned} \end{aligned}$$

we see that the coefficient of \(e_{{\mathcal {S}}(\lambda )}\) in \(e_{\varnothing }{\widehat{B}}(x_N,r_N)\ldots {\widehat{B}}(x_1,r_1) \) is equal to

$$\begin{aligned} \prod _{1\le i<j\le N}\frac{r_i^{-2}x_i-x_j}{x_i-x_j} \sum _{\tau \in {\mathfrak {S}}_N} \mathop {\textrm{sgn}}(\tau ) \prod _{k=1}^{N} \left( \frac{x_{\tau (k)}(r_{\tau (k)}^{-2}-1)}{y_{\lambda _{k}+N-k+1}-x_{\tau (k)}} \prod _{m=1}^{\lambda _k+N-k} \frac{y_m-s_m^2x_{\tau (k)}}{s_m^2(y_m-x_{\tau (k)})} \right) . \end{aligned}$$

Note that the prefactor \(\prod _{1\le i<j\le N}\frac{r_i^{-2}x_i-x_j}{x_i-x_j}\) arises by taking the product of the \(c_{ \{k \} }\)’s over all \(k=1,\ldots ,N\); in this product, the number of remaining variables decreases by one with each successive factor. Therefore, we end up with a product over \(i<j\) instead of a product over all pairs \(i\ne j\). This completes the proof of Theorem 3.9.
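
To illustrate the above procedure in the simplest case, let \(N=1\) and \(\lambda =(\lambda _1)\). At the sites \(m=1,\ldots ,\lambda _1\) we choose \(I=\varnothing \) and pick up the factor \(\frac{y_m-s_m^2x_1}{s_m^2(y_m-x_1)}\) from the action of \({\widehat{D}}(x_1,r_1)\) on \(e_0^{(m)}\); at the site \(m=\lambda _1+1\) we choose \(I=\{1 \}\) and pick up the factor \(\frac{x_1(1-r_1^2)}{r_1^2(y_{\lambda _1+1}-x_1)}=\frac{x_1(r_1^{-2}-1)}{y_{\lambda _1+1}-x_1}\) from the action of \({\widehat{B}}(x_1,r_1)\); and at all further sites the remaining operator \({\widehat{A}}(x_1,r_1)\) fixes \(e_0^{(m)}\). The product of these factors reproduces (A.1) for \(N=1\).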

1.2 Proof of Theorem 3.10

1.2.1 Recalling the notation

Throughout this subsection we fix \(M,N\ge 1\), a signature \(\lambda =(\lambda _1\ge \ldots \ge \lambda _N \ge 0)\) with N parts, and sequences of complex parameters

$$\begin{aligned} {\textbf{x}}=(x_1,\ldots ,x_M ),\qquad {\textbf{y}}=(y_1,y_2,\ldots ),\qquad {\textbf{r}}=(r_1,\ldots ,r_M ),\qquad {\textbf{s}}=(s_1,s_2,\ldots ). \end{aligned}$$

Recall from Definition 3.2 the function \(G_\lambda ({\textbf{x}};{\textbf{y}};{\textbf{r}};{\textbf{s}}) =G_{\lambda /0^N}({\textbf{x}};{\textbf{y}};{\textbf{r}};{\textbf{s}})\) which is the partition function of the free fermion six vertex model with weights W (2.3) and with boundary conditions determined by \(\lambda \).

Our aim is to prove Theorem 3.10, which gives an explicit formula (3.14) for \(G_\lambda \) in terms of a sum over a pair of permutations. The argument is longer than in the case of \(F_\lambda \) from Appendix A.1 but also involves manipulations with row operators. Namely, we utilize the operators \(A,B,C,D\) given by (2.8)–(2.9). They are built from the vertex weights W, depend on \(x,r\) and the sequences \({\textbf{y}}, {\textbf{s}}\), and act (from the left) on tensor products of two-dimensional spaces \(V^{(k)}=\mathop {\textrm{span}}\{ e_0^{(k)},e_1^{(k)} \} \simeq {\mathbb {C}}^2\), where \(k\ge 1\). Recall (Sect. 3.1) that to \(\lambda \) we associate the vector \(e_{{\mathcal {S}}(\lambda )}\) in the finitary subspace \({\mathscr {V}}\) of the infinite tensor product \(V^{(1)}\otimes V^{(2)}\otimes \ldots \), where we take \(e^{(k)}_1\) in the k-th place if and only if \(k\in {\mathcal {S}}(\lambda )\) and \(e_0^{(k)}\) otherwise. Let us also set

$$\begin{aligned} e_{[1,N]} = e^{(1)}_1\otimes \ldots \otimes e^{(N)}_1\otimes e^{(N+1)}_0 \otimes e^{(N+2)}_0\otimes \ldots . \end{aligned}$$
(A.7)

Equip all tensor products of the spaces \(V^{(k)}\) with the inner product defined by \(\langle e_{{\mathcal {T}}},e_{{\mathcal {T}}'} \rangle = {\textbf{1}}_{{\mathcal {T}}={\mathcal {T}}'}\) (here we use the notation \(e_{{\mathcal {T}}}\) as in (3.2)). Then by Proposition 3.4 we have

$$\begin{aligned} G_{\lambda }({\textbf{x}};{\textbf{y}};{\textbf{r}};{\textbf{s}})= \left\langle e_{{\mathcal {S}}(\lambda )}, D(x_M,r_M)\ldots D(x_2,r_2)D(x_1,r_1)e_{[1,N]} \right\rangle . \end{aligned}$$

We will compute the above coefficient of \(e_{{\mathcal {S}}(\lambda )}\) in the action of the product of the \(D\) operators using the Yang–Baxter equation, stated in Proposition 2.4 as a series of commutation relations between the operators \(A,B,C\), and \(D\).

Remark A.5

Sometimes, to shorten some formulas in the proofs, we will use the notation \(A_i,B_i,C_i\), or \(D_i\) for \(A(x_i,r_i),B(x_i,r_i),C(x_i,r_i)\), and \(D(x_i,r_i)\), respectively.

1.2.2 Action of D operators on a two-fold tensor product

The next two statements, Lemmas A.6 and A.7, are parallel to the computations with the row operators performed in Appendix A.1.2 in the proof of the formula for \(F_\lambda \).

Lemma A.6

Let \(\sigma \in {\mathfrak {S}}_M\) be a permutation. Then

$$\begin{aligned}{} & {} C(x_{\sigma (M)},r_{\sigma (M)})\ldots C(x_{\sigma (1)},r_{\sigma (1)})\\{} & {} \quad = C(x_M,r_M)\ldots C(x_1,r_1)\prod _{\begin{array}{c} 1\le i<j\le M\\ \sigma (j)<\sigma (i) \end{array}} \frac{r_{\sigma (j)}^{-2}x_{\sigma (j)}-x_{\sigma (i)}}{r_{\sigma (i)}^{-2}x_{\sigma (i)}-x_{\sigma (j)}}. \end{aligned}$$

Proof

This is proven by induction on the length of the permutation \(\sigma \) using the commutation relation (2.11) between the C operators. \(\square \)

Lemma A.7

As operators on a tensor product of two spaces \(V_1\otimes V_2\), we have

$$\begin{aligned} \begin{aligned}&D(x_M,r_M)D(x_{M-1},r_{M-1})\ldots D(x_1,r_1)\\&\quad = \sum _{{\mathcal {I}}\subseteq \left\{ 1,\ldots ,M \right\} } \Biggl ( \prod _{i\in {\mathcal {I}},\, j\notin {\mathcal {I}}} \frac{r_i^{-2}x_i-x_j}{r_i^{-2}x_i-r_j^{-2}x_j} \Biggr ) B(x_{i_k},r_{i_k})\ldots B(x_{i_1},r_{i_1})\\&\qquad D(x_{j_{M-k}},r_{j_{M-k}})\ldots D(x_{j_1},r_{j_1}) \\&\qquad \otimes D(x_{j_{M-k}},r_{j_{M-k}})\ldots D(x_{j_1},r_{j_1}) C(x_{i_k},r_{i_k})\ldots C(x_{i_1},r_{i_1}). \end{aligned}\nonumber \\ \end{aligned}$$
(A.8)

Here \({\mathcal {I}}=(i_1<\ldots <i_k )\) and \({\mathcal {J}}=\left\{ 1,\ldots ,M \right\} {\setminus } {\mathcal {I}}= (j_1<\ldots <j_{M-k} )\).

Proof

In the proof we use the shorthand notation for the operators from Remark A.5. By the last identity in (2.9), the action of \(D_M\ldots D_1\) on \(V_1\otimes V_2\) is given by

$$\begin{aligned} \sum _{{\mathcal {K}}\subseteq \{1,\ldots ,M \}} X_{{\mathcal {K}}}\otimes Y_{{\mathcal {K}}}, \end{aligned}$$
(A.9)

where \(X_{{\mathcal {K}}}=X_M({\mathcal {K}})\ldots X_1({\mathcal {K}}) \), \(Y_{{\mathcal {K}}}=Y_M({\mathcal {K}})\ldots Y_1({\mathcal {K}}) \), with

$$\begin{aligned} X_i({\mathcal {K}})={\left\{ \begin{array}{ll} B_i,&{} \quad i\in {\mathcal {K}};\\ D_i,&{} \quad i\notin {\mathcal {K}}, \end{array}\right. } \qquad Y_i({\mathcal {K}})={\left\{ \begin{array}{ll} C_i,&{} \quad i\in {\mathcal {K}};\\ D_i,&{} \quad i\notin {\mathcal {K}}. \end{array}\right. } \end{aligned}$$

Next, by repeated use of relations (2.15) and (2.17), the sum (A.9) can be expressed in the form

$$\begin{aligned} \sum _{I,I'\subseteq \left\{ 1,\ldots ,M \right\} } h_{I;I'}({\textbf{x}};{\textbf{r}})\, B_{i_k}\ldots B_{i_1}D_{j_{M-k}}\ldots D_{j_1} \otimes D_{i_m'}\ldots D_{i_1'} C_{j'_{M-m}}\ldots C_{j_1'},\nonumber \\ \end{aligned}$$
(A.10)

where \(h_{I;I'}({\textbf{x}};{\textbf{r}})\) are rational functions in \({\textbf{x}}=(x_1,\ldots ,x_M )\) and \({\textbf{r}}=(r_1,\ldots ,r_M )\), and the indices are

$$\begin{aligned} \begin{aligned} I=(i_1<\ldots<i_k ), \qquad I'&=(i_1'<\ldots<i_m' ),\\ J=I^c=(j_1<\ldots<j_{M-k} ), \qquad J'&=(I')^c=(j_1'<\ldots <j_{M-m}' ). \end{aligned}\nonumber \\ \end{aligned}$$

Looking more closely at relations (2.15) and (2.17), one can already see that \(m=M-k\) in (A.10). By Remark A.1, the coefficients \(h_{I;I'}({\textbf{x}};{\textbf{r}})\) are independent of the order in which we apply the commutation relations between the operators \(A,B,C,D\) to get from (A.9) to (A.10).

By the same argument as in Lemma A.2, one can show that \(h_{I;I'}({\textbf{x}};{\textbf{r}})=0\) if \(I\cap I'\ne \varnothing \) or \(J\cap J'\ne \varnothing \). Thus, it must be that \(I=J'\) and \(J=I'\), and we may abbreviate \(h_{I;I'}({\textbf{x}};{\textbf{r}})=h_I({\textbf{x}};{\textbf{r}})\). This implies that we may write (A.10) as

$$\begin{aligned} \sum _{I\cup J= \{1,\ldots ,M \}} h_{I}({\textbf{x}};{\textbf{r}})\, B_{i_k}\ldots B_{i_1}D_{j_{M-k}}\ldots D_{j_1} \otimes D_{j_{M-k}}\ldots D_{j_1} C_{i_k}\ldots C_{i_1}.\nonumber \\ \end{aligned}$$
(A.11)

It remains to evaluate the coefficients \(h_I({\textbf{x}};{\textbf{r}})\) in (A.11). This is simpler than for the case of \(F_\lambda \) considered in Appendix A.1.2. First, assume that \(I=I_k:=\left\{ 1,2,\ldots ,k \right\} \). In this case, applying (2.15) and (2.17) to a term \(X_{{\mathcal {K}}} \otimes Y_{{\mathcal {K}}}\) in (A.9) gives rise to a nonzero multiple of \(B_{k} \cdots B_{1} D_{M} \cdots D_{k+1} \otimes D_{M} \cdots D_{k+1} C_{k} \cdots C_{1}\) as a summand only if \({\mathcal {K}} = I_k\). Indeed, otherwise let \(k_0=\min {\mathcal {K}}^c\le k\). In any expression of \(X_{{\mathcal {K}}} \otimes Y_{{\mathcal {K}}}\) as a linear combination of \(B_{i_k} \cdots B_{i_1} D_{j_{M - k}} \cdots D_{j_1} \otimes D_{j_{M - k}} \cdots D_{j_1} C_{i_k} \cdots C_{i_1}\), one needs to commute \(D_{k_0}\) to the right through \(X_{k_0-1},\ldots ,X_1 \), which implies that \(j_1\le k_0\le k\); this contradicts the fact that the displayed term has \(j_1=k+1\). Therefore, it must be \({\mathcal {K}}=I_k\).

For \({\mathcal {K}}=I_k\), the only way of obtaining \(B_{k} \cdots B_{1} D_{M} \cdots D_{k+1} \otimes D_{M} \cdots D_{k+1} C_{k} \cdots C_{1}\) from \(X_{{\mathcal {K}}} \otimes Y_{{\mathcal {K}}}\) is by using (2.15) to commute each \(D_j\) to the right of each \(B_i\). This produces a factor of \((r_i^{-2}x_i-x_j)/(r_i^{-2}x_i-r_j^{-2}x_j)\) for each such commutation, and so

$$\begin{aligned} h_{I_k}({\textbf{x}};{\textbf{r}})= \prod _{i\in I_k,\,j\notin I_k} \frac{r_i^{-2}x_i-x_j}{r_i^{-2}x_i-r_j^{-2}x_j}.\nonumber \\ \end{aligned}$$

Finally, to get \(h_I\) for general \(I\), observe that the operators \(D_i\) commute by (2.13), and therefore \(h_{\sigma (I_k)}({\textbf{x}};{\textbf{r}})=h_{I_k}(\sigma ({\textbf{x}});\sigma ({\textbf{r}}))\) for any permutation \(\sigma \) taking \(I_k\) to an arbitrary \(I\); these are precisely the coefficients in the identity claimed in the present lemma. This completes the proof. \(\square \)
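
As a quick check, for \(M=1\) the identity (A.8) has only the terms \({\mathcal {I}}=\varnothing \) and \({\mathcal {I}}=\{1 \}\), both with coefficient 1, and it reduces to \(D(x_1,r_1)=D(x_1,r_1)\otimes D(x_1,r_1)+B(x_1,r_1)\otimes C(x_1,r_1)\) as operators on \(V_1\otimes V_2\), which matches the coproduct rule for \(D\) from (2.9) used at the beginning of the proof.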

For the next proposition, recall the notation \(d = d(\lambda ) \ge 0\) which is the integer such that \(\lambda _d \ge d\) and \(\lambda _{d + 1} < d + 1\), and \(\mu = (\mu _1< \mu _2< \ldots < \mu _d) = \{1,\ldots ,N \} {\setminus } \big ( {\mathcal {S}}(\lambda ) \cap \{1,\ldots ,N \} \big )\). Also consider the N-fold tensor product \(V^{(1)}\otimes \ldots \otimes V^{(N)}\), and take the following vectors in this space

$$\begin{aligned} e_{{\mathcal {S}}_N(\lambda )}:=e^{(1)}_{m_1}\otimes e^{(2)}_{m_2}\otimes \ldots \otimes e^{(N)}_{m_N}, \qquad e_{[1,N]}=e^{(1)}_{1}\otimes e^{(2)}_{1}\otimes \ldots \otimes e^{(N)}_{1},\nonumber \\ \end{aligned}$$
(A.12)

where \(m_i={\textbf{1}}_{i\in {\mathcal {S}}(\lambda )}\), and with \(e_{[1,N]}\) we are slightly abusing the notation, cf. (A.7).
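
For instance, if \(N=3\) and \(\lambda =(3,1,0)\), then \(d(\lambda )=1\) (indeed, \(\lambda _1=3\ge 1\) while \(\lambda _2=1<2\)), \({\mathcal {S}}(\lambda )=\{1,3,6 \}\), \(\mu =(2)\), and \(e_{{\mathcal {S}}_3(\lambda )}=e^{(1)}_{1}\otimes e^{(2)}_{0}\otimes e^{(3)}_{1}\).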

Proposition A.8

With the above notation, for any vectors \(v_1,v_2\in V^{(N+1)}\otimes V^{(N+2)}\otimes \ldots \) we have

$$\begin{aligned} \begin{aligned}&\left\langle e_{{\mathcal {S}}_N(\lambda )}\otimes v_2, D(x_M,r_M)\ldots D(x_2,r_2)D(x_1,r_1) (e_{[1,N]}\otimes v_1)\right\rangle \\&\quad = \prod _{j=1}^{M}\prod _{k=1}^{N} \frac{y_k-s_k^2r_j^{-2}x_j}{y_k-s_k^2 x_j}\\&\qquad \sum _{\begin{array}{c} {\mathcal {I}}\subseteq \left\{ 1,\ldots ,M \right\} \\ |{\mathcal {I}}|=d \end{array}} \left\langle v_2, \biggl ( \prod _{j\notin {\mathcal {I}}}D(x_j,r_j) \biggr ) C(x_{i_d},r_{i_d}) \ldots C(x_{i_1},r_{i_1}) v_1 \right\rangle \\&\qquad \times \prod _{i \in {\mathcal {I}},\, j\notin {\mathcal {I}}} \frac{r_i^{-2}x_i-x_j}{r_i^{-2}x_i-r_j^{-2}x_j} \prod _{i,j \in {\mathcal {I}},\,i<j} \frac{r_i^{-2}x_i-x_j}{r_i^{-2}x_i-r_j^{-2}x_j}\\&\qquad \times \sum _{\sigma \in {\mathfrak {S}}_d} \mathop {\textrm{sgn}}(\sigma ) \prod _{j=1}^d \biggl ( \frac{s_{\mu _j}^2 x_{i_{\sigma (j)}} \big ( r^{-2}_{i_{\sigma (j)}} - 1\big )}{y_{\mu _j} - s_{\mu _j}^2 r^{-2}_{i_{\sigma (j)}} x_{i_{\sigma (j)}}} \prod _{k = \mu _j + 1}^N \frac{s_k^2 \big (r^{-2}_{i_{\sigma (j)}} x_{i_{\sigma (j)}} - y_k \big )}{y_k - s_k^2 r^{-2}_{i_{\sigma (j)}} x_{i_{\sigma (j)}}} \biggr ), \end{aligned}\nonumber \\ \end{aligned}$$
(A.13)

where \({\mathcal {I}}=(i_1<\ldots <i_d )\).

The right-hand side of (A.13) vanishes if \(d(\lambda )>M\). Observe that the same is true for the left-hand side. Indeed, a single D operator moves at most one vertical arrow somewhere to the right, and d is the number of gaps (sites with no vertical arrows) among \(\left\{ 1,\ldots ,N \right\} \) in the configuration encoded by \(e_{{\mathcal {S}}_N(\lambda )}\), so d should not be larger than M.

Proof of Proposition A.8

In this proof we use the shorthand notation for the operators from Remark A.5. As a first step, we consider how a product of the \(D\) and \(C\) operators, as in the right-hand side of (A.13), acts on tensor products. Fix an integer \(n>0\), a subset \({\mathcal {H}}=\left\{ h_1,h_2,\ldots ,h_k \right\} \subseteq \left\{ 1,\ldots ,M \right\} \), and \(u,w\in V^{(n+1)}\otimes V^{(n+2)}\otimes \ldots \). Then we have

$$\begin{aligned} \begin{aligned}&\Bigg \langle e_1^{(n)} \otimes w, \bigg ( \prod _{j \notin {\mathcal {H}}} D_j \bigg ) C_{h_k} \cdots C_{h_1} \big ( e_1^{(n)} \otimes u \big ) \Bigg \rangle \\&= \Bigg \langle e_1^{(n)}, \bigg ( \prod _{j \notin {\mathcal {H}}} D_j \bigg ) A_{h_k} \cdots A_{h_1} e_1^{(n)} \Bigg \rangle \Bigg \langle w, \bigg ( \prod _{j \notin {\mathcal {H}}} D_j \bigg ) C_{h_k} \cdots C_{h_1} u \Bigg \rangle . \end{aligned}\nonumber \\ \end{aligned}$$
(A.14)

and

$$\begin{aligned} \begin{aligned}&\Bigg \langle e_0^{(n)} \otimes w, \bigg ( \prod _{j \notin {\mathcal {H}}} D_j \bigg ) C_{h_k} \cdots C_{h_1} \big ( e_1^{(n)} \otimes u \big ) \Bigg \rangle \\&\quad = \sum _{i \notin {\mathcal {H}}} \Bigg \langle e_0^{(n)}, B_i \bigg ( \prod _{j \notin {\mathcal {H}} \cup \{ i \}} D_j \bigg ) A_{h_k} \cdots A_{h_1} e_1^{(n)} \Bigg \rangle \\&\qquad \Bigg \langle w, \bigg ( \prod _{j \notin {\mathcal {H}} \cup \{ i \}} D_j \bigg ) C_i C_{h_k} \cdots C_{h_1} u \Bigg \rangle \\&\qquad \times \prod _{j \notin {\mathcal {H}} \cup \{ i \} } \frac{r^{-2}_i x_i - x_j}{r^{-2}_i x_i - r^{-2}_j x_j}. \end{aligned}\nonumber \\ \end{aligned}$$
(A.15)

Indeed, observe that \(C(x,r)\) maps \(e_1^{(n)}\) to 0, so by the third statement in (2.9) we have

$$\begin{aligned} C_{h_k}\ldots C_{h_1}(e_1^{(n)}\otimes u)= A_{h_k}\ldots A_{h_1} e_1^{(n)}\otimes C_{h_k}\ldots C_{h_1}u. \end{aligned}$$

When applying a product of the \(D_j\)’s to this vector, a nonzero term with \(e_1^{(n)}\) in the first tensor factor may appear only if we act each time by the operators D on both tensor factors, see the fourth statement in (2.9). This (together with the fact that \(\langle \cdot ,\cdot \rangle \) is multiplicative with respect to the tensor product) leads to (A.14). For (A.15), we use Lemma A.7 expressing the action of a product of the \(D_j\)’s on a tensor product, and observe that a nonzero term with \(e_0^{(n)}\) in the first tensor factor may appear only if \(|{\mathcal {I}}| =1\) in the right-hand side of (A.8).

The action of all the operators on \(e_1^{(n)}\) in the right-hand sides of (A.14)–(A.15) is explicit by (2.8) and (2.3):

$$\begin{aligned} \begin{aligned}&\Bigg \langle e_1^{(n)}, \bigg ( \displaystyle \prod _{j \notin {\mathcal {H}}} D_j \bigg ) A_{h_k} \cdots A_{h_1} e_1^{(n)} \Bigg \rangle = \prod _{k\in {\mathcal {H}}} \frac{s_n^2(x_k-r_k^2 y_n)}{r_k^2(y_n-s_n^2x_k)} \prod _{j\notin {\mathcal {H}}} \frac{y_n-s_n^2r_j^{-2} x_j}{y_n-s_n^2x_j};\\&\Bigg \langle e_0^{(n)}, B_i \bigg ( \displaystyle \prod _{j \notin {\mathcal {H}} \cup \{ i \}} D_j \bigg ) A_{h_k} \cdots A_{h_1} e_1^{(n)} \Bigg \rangle \\&\quad = \frac{s_n^2x_i(r_i^{-2}-1)}{y_n-s_n^2x_i} \prod _{k\in {\mathcal {H}}} \frac{s_n^2(x_k-r_k^2 y_n)}{r_k^2(y_n-s_n^2x_k)} \prod _{j\notin {\mathcal {H}} \cup \left\{ i \right\} } \frac{y_n-s_n^2r_j^{-2} x_j}{y_n-s_n^2x_j}. \end{aligned}\nonumber \\ \end{aligned}$$

This means that we can continue our identities as

$$\begin{aligned} \begin{aligned} (A.14)&= \Bigg \langle w, \bigg ( \displaystyle \prod _{j \notin {\mathcal {H}}} D_j \bigg ) C_{h_k} \cdots C_{h_1} u \Bigg \rangle \prod _{k\in {\mathcal {H}}} \frac{s_n^2(x_k-r_k^2 y_n)}{r_k^2(y_n-s_n^2x_k)} \prod _{j\notin {\mathcal {H}}} \frac{y_n-s_n^2r_j^{-2} x_j}{y_n-s_n^2x_j};\\ (A.15)&= \sum _{i \notin {\mathcal {H}}} \Bigg \langle w, \bigg ( \prod _{j \notin {\mathcal {H}} \cup \{ i \}} D_j \bigg ) C_i C_{h_k} \cdots C_{h_1} u \Bigg \rangle \frac{s_n^2x_i(r_i^{-2}-1)}{y_n-s_n^2x_i}\\&\quad \prod _{k\in {\mathcal {H}}} \frac{s_n^2(x_k-r_k^2 y_n)}{r_k^2(y_n-s_n^2x_k)}\\&\quad \times \prod _{j\notin {\mathcal {H}} \cup \left\{ i \right\} } \frac{y_n-s_n^2r_j^{-2} x_j}{y_n-s_n^2x_j} \prod _{j \notin {\mathcal {H}} \cup \{ i \} } \frac{r^{-2}_i x_i - x_j}{r^{-2}_i x_i - r^{-2}_j x_j}. \end{aligned}\nonumber \\ \end{aligned}$$
(A.16)

Now we can evaluate

$$\begin{aligned} \left\langle e_{{\mathcal {S}}_N(\lambda )}\otimes v_2, D(x_M,r_M)\ldots D(x_2,r_2)D(x_1,r_1) (e_{[1,N]}\otimes v_1)\right\rangle \end{aligned}$$

by repeatedly using (A.16). Start with \({\mathcal {H}}=\varnothing \), and apply the first identity in (A.16) for each \(n\notin \mu =\left\{ 1,\ldots ,N \right\} {\setminus } {\mathcal {S}}(\lambda )\), and the second identity in (A.16) for each \(n\in \mu \). Each application of the latter involves choosing an index \(i\notin {\mathcal {H}}\). This freedom is encoded by the data \(({\mathcal {I}},\sigma )\), where \({\mathcal {I}}=\{i_1<i_2< \ldots < i_d \}\subseteq \left\{ 1,\ldots ,M \right\} \) and \(\sigma \in {\mathfrak {S}}_d\), such that at the step corresponding to \(n=\mu _k\in \mu \) we choose the index \(i=i_{\sigma (k)}\) (which is then added to \({\mathcal {H}}\)). For each fixed \(({\mathcal {I}},\sigma )\) we have the following factors in the resulting expansion:

  • The inner product term \(\displaystyle \Bigl \langle v_2, \Bigl ( \prod \limits _{j\notin {\mathcal {I}}} D_j\Bigr ) C_{i_d}\ldots C_{i_1} v_1\Bigr \rangle \prod \limits _{1\le \upalpha<\upbeta \le d:\sigma (\upbeta )<\sigma (\upalpha )} \frac{r_{i_{\sigma (\upbeta )}}^{-2} x_{i_{\sigma (\upbeta )}}- x_{i_{\sigma (\upalpha )}}}{r_{i_{\sigma (\upalpha )}}^{-2} x_{i_{\sigma (\upalpha )}}-x_{i_{\sigma (\upbeta )}}} \), where the last factor comes from reordering the C operators thanks to Lemma A.6.

  • The factor \(\displaystyle \prod \limits _{i\in {\mathcal {I}},\, j\notin {\mathcal {I}}} \frac{r^{-2}_i x_i-x_j}{r^{-2}_i x_i - r^{-2}_j x_j} \prod \limits _{1\le \upalpha <\upbeta \le d} \frac{r^{-2}_{i_{\sigma (\upalpha )}} x_{i_{\sigma (\upalpha )}}-x_{i_{\sigma (\upbeta )}}}{r^{-2}_{i_{\sigma (\upalpha )}} x_{i_{\sigma (\upalpha )}} - r^{-2}_{i_{\sigma (\upbeta )}} x_{i_{\sigma (\upbeta )}}}\) arises by applying the second identity in (A.16) for each \(n\in \mu \). Reordering the denominator in the second factor gives

    $$\begin{aligned} \prod \nolimits _{1\le \upalpha<\upbeta \le d} \frac{1}{r^{-2}_{i_{\sigma (\upalpha )}} x_{i_{\sigma (\upalpha )}} - r^{-2}_{i_{\sigma (\upbeta )}} x_{i_{\sigma (\upbeta )}}} =\mathop {\textrm{sgn}}(\sigma ) \prod \nolimits _{i,j\in {\mathcal {I}},\, i<j} \frac{1}{r_i^{-2}x_i-r_j^{-2}x_j}. \end{aligned}$$
  • The product \(\displaystyle \prod \nolimits _{j=1}^d \frac{s_{\mu _j}^2 x_{i_{\sigma (j)}}(r_{i_{\sigma (j)}}^{-2}-1)}{y_{\mu _j}-s_{\mu _j}^2x_{i_{\sigma (j)}}}\) is composed of one factor per each application of the second identity in (A.16) corresponding to \(n=\mu _j\in \mu \).

  • The product \(\displaystyle \prod \nolimits _{j=1}^{d}\prod \nolimits _{n=\mu _j+1}^N \frac{s_n^2(x_{i_{\sigma (j)}}-r_{i_{\sigma (j)}}^2 y_n)}{r_{i_{\sigma (j)}}^2(y_n-s_n^2x_{i_{\sigma (j)}})} \) arises from both identities in (A.16) which contain the same products over \(k\in {\mathcal {H}}\).

  • Finally, the product \(\displaystyle \biggl ( \prod \nolimits _{n=1}^N\prod \nolimits _{j=1}^{M} \frac{y_n-s_n^2r_j^{-2} x_j}{y_n-s_n^2x_j} \biggr ) \biggl ( \prod \nolimits _{j=1}^d \prod \nolimits _{n=\mu _j}^{N} \frac{y_n-s_n^2x_{i_{\sigma (j)}}}{y_n-s_n^2r_{i_{\sigma (j)}}^{-2} x_{i_{\sigma (j)}}} \biggr ) \) arises from the products over \(j\notin {\mathcal {H}}\) or \(j\notin {\mathcal {H}}\cup \left\{ i \right\} \) in (A.16).

Combining all the terms yields the desired identity.

1.2.3 Commutation of the operators C and D

In this subsection we establish one of the key formulas concerning the commutation of the operators C and D. We fix \(M,N\ge 1\) and sequences of complex numbers

$$\begin{aligned} {\textbf{x}}=(x_1,\ldots ,x_N ),\qquad {\textbf{r}}=(r_1,\ldots ,r_N ),\qquad {\textbf{w}}=(w_1,\ldots ,w_M ),\qquad \varvec{\uptheta }=(\theta _1,\ldots ,\theta _M ). \end{aligned}$$

Proposition A.9

We have

$$\begin{aligned} \begin{aligned}&D(x_N,r_N)\ldots D(x_1,r_1) C(w_M,\theta _M)\ldots C(w_1,\theta _1) \\&\quad = \sum _{\begin{array}{c} {\mathcal {I}}\subseteq \left\{ 1,\ldots ,N \right\} \\ {\mathcal {H}}\subseteq \left\{ 1,\ldots ,M \right\} \\ |{\mathcal {I}}|+|{\mathcal {H}}|=M \end{array}} C(x_{i_k},r_{i_k})\ldots C(x_{i_1},r_{i_1})\, C(w_{h_{M-k}},\theta _{h_{M-k}})\ldots C(w_{h_1},\theta _{h_1}) \\&\qquad \prod _{j\notin {\mathcal {H}}}D(w_j,\theta _j) \prod _{j\notin {\mathcal {I}}}D(x_j,r_j)\\&\qquad \times \prod _{i\in {\mathcal {I}}}(1-r_i^{-2})x_i \prod _{i\in {\mathcal {I}},\, j\notin {\mathcal {I}}}\frac{r_j^{-2}x_j-x_i}{x_j-x_i} \prod _{h\in {\mathcal {H}},\, j\notin {\mathcal {H}}}\frac{1}{w_j-w_h}\\&\qquad \prod _{h\in {\mathcal {H}},\, j\notin {\mathcal {I}}}\frac{r_j^{-2}x_j-w_h}{x_j-w_h} \prod _{i\in {\mathcal {I}},\, j\notin {\mathcal {H}}}\frac{1}{x_i-w_j}\\&\qquad \times \prod _{i,j\in {\mathcal {I}},\, i<j}(r_i^{-2}x_i-x_j) \prod _{i,h\in {\mathcal {H}},\, h<i}\frac{1}{\theta _i^{-2}w_i-w_h} \prod _{1\le i<j\le M}(\theta _j^{-2}w_j-w_i). \end{aligned}\nonumber \\ \end{aligned}$$
(A.17)

Here \({\mathcal {I}}=(i_1<\ldots <i_k )\) and \({\mathcal {H}}=(h_1<\ldots <h_{M-k} )\).

Recall that the operators \(D(x_j,r_j)\) commute by (2.13), so we can write their products in any order. This is not the case for the operators \(C(w_j,\theta _j)\), which is why their order in (A.17) must be specified explicitly.

The rest of this subsection is devoted to the proof of Proposition A.9. As a first step, let us establish the claim for \(M=1\):

Lemma A.10

(Proposition A.9 for \(M=1\)) We have

$$\begin{aligned} \begin{aligned}&D(x_N,r_N)\ldots D(x_1,r_1) C(w,\theta ) = C(w,\theta ) D(x_1,r_1)\ldots D(x_N,r_N) \prod _{j=1}^{N}\frac{r_j^{-2}x_j-w}{x_j-w}\\&\quad + \sum _{i=1}^{N} \biggl ( C(x_i,r_i)D(w,\theta )\prod _{j\ne i}D(x_j,r_j) \biggr ) \frac{(1-r_i^{-2})x_i}{x_i-w}\prod _{j\ne i}\frac{r_j^{-2}x_j-x_i}{x_j-x_i}. \end{aligned}\nonumber \\ \end{aligned}$$
(A.18)

Proof

The first term, containing \(C(w,\theta ) D(x_1,r_1)\ldots D(x_N,r_N)\), may only arise if we pick the first summand in (2.16) at each commutation. This produces the desired product \(\prod _{j=1}^{N}\frac{r_j^{-2}x_j-w}{x_j-w}\) as a prefactor.

Now let us explain how to get the summand in the second sum corresponding to \(i=1\). Thanks to the commutativity of the \(D(x_j,r_j)\)’s, the form of the other summands would then follow. To get the term containing \(C(x_1,r_1)D(w,\theta )D(x_2,r_2)\ldots D(x_N,r_N)\), we must pick the second summand in (2.16) once, when moving \(C(w,\theta )\) to the left of \(D(x_1,r_1)\). This produces \(C(x_1,r_1)D(w,\theta )\frac{(1-r_1^{-2})x_1}{x_1-w}\). After that, we move \(C(x_1,r_1)\) to the left of all the other \(D(x_j,r_j)\)’s, always picking the first summand in (2.16). This produces the desired identity.

We now consider the general case \(M,N\ge 1\) of (A.17). First, repeatedly using relations (2.11), (2.13), and (2.16), we have

$$\begin{aligned} \begin{aligned}&D(x_N,r_N)\ldots D(x_1,r_1) C(w_M,\theta _M)\ldots C(w_1,\theta _1)\\&\quad = \sum _{{\mathcal {I}},{\mathcal {H}}} C(x_{i_k},r_{i_k})\ldots C(x_{i_1},r_{i_1})\, C(w_{h_{M-k}},\theta _{h_{M-k}})\ldots C(w_{h_1},\theta _{h_1})\\&\qquad \times \prod _{j\notin {\mathcal {H}}}D(w_j,\theta _j) \prod _{j\notin {\mathcal {I}}}D(x_j,r_j) R_{{\mathcal {I}};{\mathcal {H}}}({\textbf{w}};{\textbf{x}};\varvec{\uptheta };{\textbf{r}}), \end{aligned}\nonumber \\ \end{aligned}$$
(A.19)

where the sum is taken over \({\mathcal {I}}\subseteq \left\{ 1,\ldots ,N \right\} \) and \({\mathcal {H}}\subseteq \left\{ 1,\ldots ,M \right\} \), such that \(|{\mathcal {I}}| =k\), \(|{\mathcal {H}}| =M-k\), and k is arbitrary (see (A.17)). Here \(R_{{\mathcal {I}};{\mathcal {H}}}\) are some rational functions which we will now evaluate.

Lemma A.11

(Evaluation of \(R_{{\mathcal {I}};{\mathcal {H}}}\) in a special case) Let \({\mathcal {H}}=\{1,2,\ldots ,M-k \}\), and \({\mathcal {I}}=(i_1<\ldots <i_k )\subseteq \left\{ 1,\ldots ,N \right\} \) with \(|{\mathcal {I}}| =k\) be arbitrary. Then

$$\begin{aligned} \begin{aligned}&R_{{\mathcal {I}};{\mathcal {H}}}({\textbf{w}};{\textbf{x}};\varvec{\uptheta };{\textbf{r}}) = \prod _{i\in {\mathcal {I}}}(1-r_i^{-2})x_i \prod _{i\in {\mathcal {I}},\, j\notin {\mathcal {I}}}\frac{r_j^{-2}x_j-x_i}{x_j-x_i} \\&\qquad \prod _{h\in {\mathcal {H}},\, j\notin {\mathcal {H}}}\frac{1}{w_j-w_h} \prod _{h\in {\mathcal {H}},\, j\notin {\mathcal {I}}}\frac{r_j^{-2}x_j-w_h}{x_j-w_h}\\&\qquad \times \prod _{i\in {\mathcal {I}},\, j\notin {\mathcal {H}}}\frac{1}{x_i-w_j} \prod _{i,j\in {\mathcal {I}},\, i<j}(r_i^{-2}x_i-x_j)\\&\qquad \prod _{i,h\in {\mathcal {H}},\, h<i}\frac{1}{\theta _i^{-2}w_i-w_h} \prod _{1\le i<j\le M}(\theta _j^{-2}w_j-w_i). \end{aligned}\nonumber \\ \end{aligned}$$
(A.20)

Proof

From the left-hand side of (A.19), we apply (2.16) (together with permutation relations (2.11), (2.13) for the operators CD) to move all the operators C to the left of all the operators D. The operator

$$\begin{aligned}{} & {} C(x_{i_k},r_{i_k})\ldots C(x_{i_1},r_{i_1}) C(w_{M-k},\theta _{M-k})\ldots C(w_1,\theta _1)\\{} & {} \qquad \prod _{j=M-k+1}^{M}D(w_j,\theta _j) \prod _{j\notin {\mathcal {I}}}D(x_j,r_j) \end{aligned}$$

may arise, after a sequence of applications of Lemma A.10, only if there exists a permutation \(\sigma \in {\mathfrak {S}}_k\) such that the following two conditions are met:

  • When moving each \(C(w_{M-k+j},\theta _{M-k+j})\), \(1\le j\le k\), to the left, we turn \((w_{M-k+j},\theta _{M-k+j})\) into \((x_{i_{\sigma (j)}},r_{i_{\sigma (j)}})\). This corresponds to picking the second summand in (2.16), and this swapping of parameters may happen only once for each C operator.

  • When moving each \(C(w_j,\theta _j)\), \(1\le j\le M-k\), to the left, we always pick the first summand in (2.16), and the parameters \((w_j,\theta _j)\) stay the same throughout the exchanges.

To be able to put all the coefficients together, denote \(\sigma _t ({\mathcal {I}}) = \big ( i_{\sigma (t)}, i_{\sigma (t + 1)}, \ldots , i_{\sigma (k)} \big )\) for each \(1\le t\le k\). Then, for each integer \(1\le j\le k\), when attempting to commute \(C(w_{M - k + j}, \theta _{M - k + j})\) to the left of

$$\begin{aligned} \prod _{h \notin \sigma _{j + 1} ({\mathcal {I}})} D (x_h, r_h) \prod _{h = j + 1}^k D (w_{M - k + h}, \theta _{M - k + h}), \end{aligned}$$

we obtain

$$\begin{aligned} C \big ( x_{i_{\sigma (j)}}, r_{i_{\sigma (j)}} \big ) \prod _{h \notin \sigma _j ({\mathcal {I}})} D (x_h, r_h) \prod _{h = j}^k D (w_{M - k + h}, \theta _{M - k + h}). \end{aligned}$$

By Lemma A.10, this contributes a factor of

$$\begin{aligned} \frac{\big ( 1 - r^{-2}_{i_{\sigma (j)}} \big ) x_{i_{\sigma (j)}}}{x_{i_{\sigma (j)}} - w_{M - k + j}} \prod _{h = M - k + j + 1}^M \frac{\theta _h^{-2} w_h - x_{i_{\sigma (j)}}}{w_h - x_{i_{\sigma (j)}}} \prod _{h \notin \sigma _j ({\mathcal {I}})} \frac{r^{-2}_h x_h - x_{i_{\sigma (j)}}}{x_h - x_{i_{\sigma (j)}}}. \end{aligned}$$
(A.21)

This deals with the first case above when we swap the parameters between C and D operators.

In the second case when we do not swap the parameters, each \(C (w_j, \theta _j)\) for \(1\le j\le M-k\) must be commuted to the left of \(\prod _{h \notin {\mathcal {I}}} D (x_h, r_h) \prod _{h = M - k + 1}^M D (w_h, \theta _h)\), which contributes

$$\begin{aligned} \prod _{h \notin {\mathcal {I}}} \frac{r^{-2}_h x_h - w_j}{x_h - w_j} \prod _{h = M - k + 1}^M \frac{\theta _h^{-2} w_h - w_j}{w_h - w_j}. \end{aligned}$$
(A.22)

Observe that

$$\begin{aligned} \begin{aligned} \prod _{j = 1}^k \big ( 1 - r^{-2}_{i_{\sigma (j)}} \big ) x_{i_{\sigma (j)}}&= \prod _{i \in {\mathcal {I}}} (1 - r^{-2}_i) x_i;\\ \prod _{j = 1}^k \prod _{h \notin \sigma _j ({\mathcal {I}})} \frac{r^{-2}_h x_h - x_{i_{\sigma (j)}}}{x_h - x_{i_{\sigma (j)}}}&= \prod _{ i\in {\mathcal {I}},\, h \notin {\mathcal {I}}} \frac{r^{-2}_h x_h - x_i}{x_h - x_i} \prod _{1 \le h < j \le k} \frac{r^{-2}_{i_{\sigma (h)}} x_{i_{\sigma (h)}} - x_{i_{\sigma (j)}}}{x_{i_{\sigma (h)}} - x_{i_{\sigma (j)}}}, \end{aligned}\nonumber \\ \end{aligned}$$
(A.23)

Now, combining the product of (A.21) over \(1\le j\le k\) and (A.22) over \(1\le j\le M-k\), and using (A.23), we see that the desired coefficient depending on \(\sigma \in {\mathfrak {S}}_k\) is equal to

$$\begin{aligned} \begin{aligned}&\prod _{i \in {\mathcal {I}}} (1 - r^{-2}_i) x_i \prod _{i \in {\mathcal {I}},\, h \notin {\mathcal {I}}} \frac{r^{-2}_h x_h - x_i}{x_h - x_i} \prod _{j = 1}^{M - k} \Bigg ( \prod _{h \notin {\mathcal {I}}} \frac{r^{-2}_h x_h - w_j}{x_h - w_j} \prod _{h = M - k + 1}^M \frac{\theta ^{-2}_h w_h - w_j}{w_h - w_j} \Bigg )\\&\quad \times \prod _{1 \le h < j \le k} \frac{r^{-2}_{i_{\sigma (h)}} x_{i_{\sigma (h)}} - x_{i_{\sigma (j)}}}{x_{i_{\sigma (h)}} - x_{i_{\sigma (j)}}} \prod _{j = 1}^k \Bigg ( \frac{1}{x_{i_{\sigma (j)}} - w_{M - k + j}} \prod _{h = M - k + j + 1}^M \frac{\theta ^{-2}_h w_h - x_{i_{\sigma (j)}}}{w_h - x_{i_{\sigma (j)}}} \Bigg ). \end{aligned}\nonumber \\ \end{aligned}$$
(A.24)

Note that this is the coefficient of the operator

$$\begin{aligned}{} & {} C(x_{i_{\sigma (k)}},r_{i_{\sigma (k)}}) \ldots C(x_{i_{\sigma (1)}},r_{i_{\sigma (1)}}) C(w_{M-k},\theta _{M-k})\ldots C(w_1,\theta _1)\\{} & {} \qquad \prod _{j=M-k+1}^{M}D(w_j,\theta _j) \prod _{j\notin {\mathcal {I}}}D(x_j,r_j), \end{aligned}$$

and permuting the first k of the C operators to the desired order \(C(x_{i_k},r_{i_k})\ldots C(x_{i_1},r_{i_1}) \) results in an additional factor

$$\begin{aligned} \prod _{\begin{array}{c} 1\le \upalpha<\upbeta \le k\\ \sigma (\upbeta )<\sigma (\upalpha ) \end{array}} \frac{r^{-2}_{i_{\sigma (\upbeta )}}x_{i_{\sigma (\upbeta )}}-x_{i_{\sigma (\upalpha )}}}{r^{-2}_{i_{\sigma (\upalpha )}}x_{i_{\sigma (\upalpha )}}-x_{i_{\sigma (\upbeta )}}}, \end{aligned}$$
(A.25)

by Lemma A.6.

This implies that the full coefficient \(R_{{\mathcal {I}};{\mathcal {H}}}({\textbf{w}};{\textbf{x}};\varvec{\uptheta };{\textbf{r}})\) equals the sum of (A.24) times (A.25) over all \(\sigma \in {\mathfrak {S}}_k\). We have

$$\begin{aligned}{} & {} \prod _{1 \le h< j \le k} \frac{r^{-2}_{i_{\sigma (h)}} x_{i_{\sigma (h)}} - x_{i_{\sigma (j)}}}{x_{i_{\sigma (h)}} - x_{i_{\sigma (j)}}} \prod _{\begin{array}{c} 1\le \upalpha<\upbeta \le k\\ \sigma (\upbeta )<\sigma (\upalpha ) \end{array}} \frac{r^{-2}_{i_{\sigma (\upbeta )}}x_{i_{\sigma (\upbeta )}}-x_{i_{\sigma (\upalpha )}}}{r^{-2}_{i_{\sigma (\upalpha )}}x_{i_{\sigma (\upalpha )}}-x_{i_{\sigma (\upbeta )}}}\\{} & {} \qquad = {\textrm{sgn}}(\sigma ) \prod _{i,j\in {\mathcal {I}},\, i<j}\frac{r_i^{-2}x_i-x_j}{x_i-x_j}. \end{aligned}$$
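
This identity is a purely rational statement and can be checked independently of the operator computations. The following minimal Python sketch (illustrative only, not used in the proof) verifies it numerically for \(k=4\) and all \(\sigma \in {\mathfrak {S}}_4\); the random values a[m] and x[m] stand in for generic \(r_{i_m}^{-2}x_{i_m}\) and \(x_{i_m}\).

import numpy as np
from itertools import permutations

# Numerical check of the displayed sign identity for k = 4 (generic values).
rng = np.random.default_rng(2)
k = 4
x = rng.normal(size=k) + 1j * rng.normal(size=k)      # stands for x_{i_1}, ..., x_{i_k}
a = rng.uniform(0.5, 2.0, size=k) * x                 # stands for r_{i_m}^{-2} x_{i_m}

def sgn(sigma):  # sign of a permutation via its inversion count
    inv = sum(1 for p in range(k) for q in range(p + 1, k) if sigma[p] > sigma[q])
    return (-1) ** inv

rhs_base = np.prod([(a[i] - x[j]) / (x[i] - x[j]) for i in range(k) for j in range(i + 1, k)])
for sigma in permutations(range(k)):
    lhs = np.prod([(a[sigma[h]] - x[sigma[j]]) / (x[sigma[h]] - x[sigma[j]])
                   for h in range(k) for j in range(h + 1, k)])
    lhs *= np.prod([(a[sigma[b]] - x[sigma[al]]) / (a[sigma[al]] - x[sigma[b]])
                    for al in range(k) for b in range(al + 1, k) if sigma[b] < sigma[al]])
    assert abs(lhs - sgn(sigma) * rhs_base) < 1e-8 * (1 + abs(rhs_base))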

Therefore, the summation over \(\sigma \) amounts to computing the determinant:

$$\begin{aligned} \begin{aligned}&\sum _{\sigma \in {\mathfrak {S}}_k} \mathop {\textrm{sgn}}(\sigma ) \prod _{j = 1}^k \Bigg ( \frac{1}{x_{i_{\sigma (j)}} - w_{M - k + j}} \prod _{h = M - k + j + 1}^M \frac{\theta ^{-2}_h w_h - x_{i_{\sigma (j)}}}{w_h - x_{i_{\sigma (j)}}} \Bigg ) \\&\quad = \det \left[ \frac{1}{x_{i_{\upbeta }} - w_{M - k + \upalpha }} \prod _{h = M - k + \upalpha + 1}^M \frac{\theta ^{-2}_h w_h - x_{i_{\upbeta }}}{w_h - x_{i_{\upbeta }}} \right] _{\upalpha ,\upbeta =1}^{k}. \end{aligned}\nonumber \\ \end{aligned}$$
(A.26)

We have already computed this determinant (up to renaming the variables) in (3.9), and so

$$\begin{aligned} (A.26)= \prod _{i\in {\mathcal {I}},\, j\notin {\mathcal {H}}} \frac{1}{x_i-w_j} \prod _{i,j\notin {\mathcal {H}},\,i<j}(\theta _j^{-2}w_j-w_i) \prod _{i,j\in {\mathcal {I}},\,i<j}(x_i-x_j), \end{aligned}$$

where we recalled that \({\mathcal {H}}=\left\{ 1,2,\ldots ,M-k \right\} \). Combining this with the remainder of (A.24), we arrive at the desired expression (A.20), thus concluding the proof of Lemma A.11. \(\square \)
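
Although not needed for the argument, the determinant evaluation used in the last step can be tested numerically for generic parameters. Below is a minimal Python sketch; the values of \(M\), \(k\), the subset \({\mathcal {I}}\), and the variables \(x_j,w_j,\theta _j\) are arbitrary choices made only for illustration.

import numpy as np
from itertools import combinations

# Numerical check of the evaluation of the determinant (A.26) in the case
# H = {1, ..., M-k}; all parameter values below are arbitrary generic choices.
rng = np.random.default_rng(3)
M, k = 5, 3
x = rng.normal(size=M) + 1j * rng.normal(size=M)      # x_1, ..., x_M
w = rng.normal(size=M) + 1j * rng.normal(size=M)      # w_1, ..., w_M
t2 = rng.uniform(1.5, 2.5, size=M)                    # theta_1^2, ..., theta_M^2
I = sorted(rng.choice(M, size=k, replace=False))      # the set I, as 0-based indices

def entry(al, be):  # entry of the matrix in (A.26), with 0-based row al and column be
    z = x[I[be]]
    val = 1.0 / (z - w[M - k + al])
    for h in range(M - k + al + 1, M):
        val *= (w[h] / t2[h] - z) / (w[h] - z)
    return val

lhs = np.linalg.det(np.array([[entry(al, be) for be in range(k)] for al in range(k)]))

rhs = 1.0 + 0.0j
for i in I:
    for j in range(M - k, M):                         # j outside H
        rhs /= (x[i] - w[j])
for i, j in combinations(range(M - k, M), 2):         # i < j, both outside H
    rhs *= (w[j] / t2[j] - w[i])
for a, b in combinations(I, 2):                       # i < j, both in I
    rhs *= (x[a] - x[b])

print(abs(lhs - rhs) / abs(rhs))                      # should be at rounding-error level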

Finally, to get \(R_{{\mathcal {I}};{\mathcal {H}}}\) for general \({\mathcal {H}}\), we can permute the C operators in the left-hand side of (A.17) thanks to (2.11). More precisely, the two expressions

$$\begin{aligned} \begin{aligned}&C(w_M,\theta _M)\ldots C(w_1,\theta _1)\prod _{1\le i<j\le M}\frac{1}{\theta _j^{-2}w_j-w_i},\\&C(w_{h_{M-k}},\theta _{h_{M-k}})\ldots C(w_{h_1},\theta _{h_1}) \prod _{i,j\in {\mathcal {H}},\,i<j}\frac{1}{\theta _j^{-2}w_j-w_i} \end{aligned}\nonumber \\ \end{aligned}$$

are symmetric in \((w_i,\theta _i)\), \(1\le i\le M\), and \((w_h,\theta _h)\), \(h\in {\mathcal {H}}\), respectively. Defining

$$\begin{aligned} {\widehat{R}}_{{\mathcal {I}};{\mathcal {H}}}({\textbf{w}};{\textbf{x}};\varvec{\uptheta };{\textbf{r}}) = R_{{\mathcal {I}};{\mathcal {H}}}({\textbf{w}};{\textbf{x}};\varvec{\uptheta };{\textbf{r}})\, \frac{\prod _{i,j\in {\mathcal {H}},\,i<j}(\theta _j^{-2}w_j-w_i)}{\prod _{1\le i<j\le M}(\theta _j^{-2}w_j-w_i)}, \end{aligned}$$
(A.27)

we see that for any permutation \(\tau \in {\mathfrak {S}}_M\) we have \( {\widehat{R}}_{{\mathcal {I}};\tau ({\mathcal {H}})}(\tau ({\textbf{w}});{\textbf{x}};\tau (\varvec{\uptheta });{\textbf{r}}) = {\widehat{R}}_{{\mathcal {I}};{\mathcal {H}}}({\textbf{w}};{\textbf{x}};\varvec{\uptheta };{\textbf{r}}) \). The renormalization in (A.27) cancels the last two factors in \(R_{{\mathcal {I}};\left\{ 1,\ldots ,M-k \right\} }\) in (A.20). This, together with the symmetry of (A.27), implies that \(R_{{\mathcal {I}};{\mathcal {H}}}\) for general \({\mathcal {H}}\) is given by the same formula. We have thus completed the proof of Proposition A.9.

1.2.4 Action of C operators on a two-fold tensor product

In this subsection we perform computations with row operators acting on tensor products which are parallel to those in Appendices A.1.2 and A.2.2, but now involve the C operators.

Lemma A.12

Let \({\textbf{x}}=(x_1,\ldots ,x_M )\), \({\textbf{r}}=(r_1,\ldots ,r_M )\). On any tensor product \(V_1\otimes V_2\) we have:

$$\begin{aligned} \begin{aligned}&C(x_M,r_M)\ldots C(x_1,r_1)\\&\quad = \sum _{{\mathcal {I}}\subseteq \{1,\ldots ,M \}} C(x_{i_k},r_{i_k})\ldots C(x_{i_1},r_{i_1}) A(x_{j_{M-k}},r_{j_{M-k}})\ldots A(x_{j_1},r_{j_1})\\&\qquad \otimes C(x_{j_{M-k}},r_{j_{M-k}})\ldots C(x_{j_1},r_{j_1}) D(x_{i_k},r_{i_k})\ldots D(x_{i_1},r_{i_1})\\&\qquad \times \prod _{i\in {\mathcal {I}},\, j\in {\mathcal {J}}}\frac{1}{x_i-x_j} \prod _{1\le i<j\le M}(r_j^{-2}x_j-x_i) \\&\qquad \prod _{i,j\in {\mathcal {I}},\,i<j} \frac{1}{r_j^{-2}x_j-x_i} \prod _{i,j\in {\mathcal {J}},\,i<j} \frac{1}{r_j^{-2}x_j-x_i}, \end{aligned}\nonumber \\ \end{aligned}$$
(A.28)

where \({\mathcal {I}}=(i_1<\ldots <i_k )\) and \({\mathcal {J}}=\left\{ 1,\ldots ,M \right\} {\setminus } {\mathcal {I}}= (j_1<\ldots <j_{M-k} )\).

Proof

In the proof we use the shorthand notation for the operators from Remark A.5. Due to (2.9), relations in Proposition 2.4, and an argument identical to the beginning of the proof of Lemma A.7, we see that the left-hand side of (A.28) can be written in the form

$$\begin{aligned} \sum _{{\mathcal {I}}\subseteq \left\{ 1,\ldots ,M \right\} } h_{{\mathcal {I}}}({\textbf{x}};{\textbf{r}}) C_{i_k}\ldots C_{i_1}A_{j_{M-k}}\ldots A_{j_1} \otimes C_{j_{M-k}}\ldots C_{j_1} D_{i_k}\ldots D_{i_{1}}, \end{aligned}$$

where the notation \({\mathcal {I}},{\mathcal {J}}\) is as in (A.28).

We first evaluate \(h_{{\mathcal {I}}}\) in the special case \({\mathcal {I}}={\mathcal {I}}_k=\left\{ M-k+1,\ldots ,M-1,M \right\} \). The contribution containing the operator \(C_M\ldots C_{M-k+1}A_{M-k}\ldots A_1\otimes C_{M-k}\ldots C_1 D_{M}\ldots D_{M-k+1}\) may arise only if we use (2.16) in the second tensor factor to commute all \(C_j\), \(j\notin {\mathcal {I}}_k\), to the left of all \(D_i\), \(i\in {\mathcal {I}}\), without swapping their arguments. Each such commutation gives rise to the factor \(\frac{r_i^{-2}x_i-x_j}{x_i-x_j}\). Therefore,

$$\begin{aligned} \begin{aligned}&h_{{\mathcal {I}}_k}({\textbf{x}};{\textbf{r}}) = \prod _{i=M-k+1}^{M}\prod _{j=1}^{M-k}\frac{r_i^{-2}x_i-x_j}{x_i-x_j}\\&\quad = \prod _{i\in {\mathcal {I}}_k,\, j\notin {\mathcal {I}}_k}\frac{1}{x_i-x_j} \prod _{1\le i<j\le M}(r_j^{-2}x_j-x_i)\\&\qquad \prod _{i,j\in {\mathcal {I}}_k,\,i<j} \frac{1}{r_j^{-2}x_j-x_i} \prod _{i,j\notin {\mathcal {I}}_k,\,i<j} \frac{1}{r_j^{-2}x_j-x_i}. \end{aligned}\nonumber \\ \end{aligned}$$
(A.29)

Next, thanks to (2.11) the three expressions

$$\begin{aligned} \frac{C_M \ldots C_1}{\prod _{1\le i<j\le M}(r_j^{-2}x_j-x_i)}, \qquad \frac{C_{i_k} \ldots C_{i_1}}{\prod _{i,j\in {\mathcal {I}},\,i<j}(r_j^{-2}x_j-x_i)}, \qquad \frac{C_{j_{M-k}} \ldots C_{j_1}}{\prod _{i,j\notin {\mathcal {I}},\,i<j}(r_j^{-2}x_j-x_i)} \end{aligned}$$

are symmetric in the pairs \((x_i,r_i)\) of variables they depend on (where \(1\le i\le M\), \(i\in {\mathcal {I}}\), and \(i\notin {\mathcal {I}}\), respectively). Therefore, the function

$$\begin{aligned} \widehat{h}_{{\mathcal {I}}}({\textbf{x}};{\textbf{r}})= h_{{\mathcal {I}}}({\textbf{x}};{\textbf{r}})\, \frac{ \prod _{i,j\in {\mathcal {I}},\, i<j}(r_j^{-2}x_j-x_i) \prod _{i,j\notin {\mathcal {I}},\, i<j}(r_j^{-2}x_j-x_i) }{ \prod _{1\le i<j\le M}(r_j^{-2}x_j-x_i) } \end{aligned}$$

satisfies \({{\widehat{h}}}_{\tau ({\mathcal {I}})}({\textbf{x}};{\textbf{r}}) ={{\widehat{h}}}_{{\mathcal {I}}}(\tau ^{-1}({\textbf{x}});\tau ^{-1}({\textbf{r}}))\) for any permutation \(\tau \in {\mathfrak {S}}_M\). Together with (A.29) this shows that for any \({\mathcal {I}}\) we have \({{\widehat{h}}}_{{\mathcal {I}}}({\textbf{x}};{\textbf{r}})=\prod _{i\in {\mathcal {I}},\,j\notin {\mathcal {I}}}(x_i-x_j)^{-1}\), which implies the claim. \(\square \)

In the next proposition, let \(e_0 = e_0^{(i_1)} \otimes e_0^{(i_2)} \otimes \cdots \otimes e_0^{(i_n)} \in V^{(i_1)} \otimes V^{(i_2)} \otimes \cdots \otimes V^{(i_n)}\) for any integers \(i_1< i_2< \cdots < i_n\). Moreover, fix \(M \ge 1, N \ge 0\), and \({\mathcal {T}} = (t_1< t_2< \ldots < t_M) \subset {\mathbb {Z}}_{\ge 1}\). Define the vector \(e_{{\mathcal {T}}; N} = e_{m_1}^{(N + 1)} \otimes e_{m_2}^{(N + 2)} \otimes \cdots \in V^{(N + 1)} \otimes V^{(N + 2)} \otimes \cdots \), where \(m_i=1\) if \(i\in {\mathcal {T}}\), and 0 otherwise.

Proposition A.13

With the above notation we have

$$\begin{aligned} \begin{aligned}&\bigl \langle e_{{\mathcal {T}}; N}, C(x_M,r_M) \cdots C(x_1,r_1)\, e_0 \bigr \rangle = \prod _{1 \le i< j \le M} \frac{r_j^{-2} x_j - x_i}{x_i - x_j} \\&\quad \times \sum _{\sigma \in {\mathfrak {S}}_M} \mathop {\textrm{sgn}}(\sigma ) \prod _{j = 1}^M \Bigg ( \frac{y_{t_j + N} \big ( 1 - s_{t_j + N}^2 \big )}{y_{t_j + N} - s_{t_j + N}^2 x_{\sigma (j)}} \prod _{i = N + 1}^{t_j + N - 1} \frac{s_i^2 \big (y_i - x_{\sigma (j)} \big )}{y_i - s_i^2 x_{\sigma (j)}} \Bigg ), \end{aligned}\nonumber \\ \end{aligned}$$

where the inner product is taken in the space \(V^{(N+1)}\otimes V^{(N+2)}\otimes \ldots \).

Observe that this formula is determinantal, and is in fact equivalent to the determinantal formula for \(F_\lambda \) from Theorem 3.9 proven in Appendix A.1, up to swapping horizontal arrows with empty horizontal edges, and renormalizing. Here, however, we present an independent proof which is more convenient given our previous statements.

Proof of Proposition A.13

In the proof we use the shorthand notation for the operators from Remark A.5. Fix \(n>N\) and vectors \(v_1,v_2\in V^{(n+1)}\otimes V^{(n+2)}\otimes \ldots \). By Lemma A.12, we have

$$\begin{aligned} \begin{aligned} \big \langle e_0^{(n)} \otimes v_2, C_M C_{M - 1} \cdots C_1 e_0 \big \rangle&= \big \langle e_0^{(n)}, A_M A_{M - 1} \cdots A_1 e_0^{(n)} \big \rangle \langle v_2, C_M C_{M - 1} \cdots C_1 e_0 \rangle ;\\ \big \langle e_1^{(n)} \otimes v_2, C_M C_{M - 1} \cdots C_1 e_0 \big \rangle&= \sum _{i = 1}^M \big \langle e_1^{(n)}, C_i A_M \cdots A_{i + 1} A_{i - 1} \cdots A_1 e_0^{(n)} \big \rangle \\&\qquad \times \langle v_2, C_M \cdots C_{i + 1} C_{i - 1} \cdots C_1 D_i e_0 \rangle \\&\qquad \times \prod _{j \ne i} \frac{1}{x_i - x_j} \prod _{j = 1}^{i - 1} (r^{-2}_i x_i - x_j) \prod _{j = i + 1}^M (r^{-2}_j x_j - x_i). \end{aligned}\nonumber \\ \end{aligned}$$
(A.30)

These quantities can be computed as follows:

$$\begin{aligned} \begin{aligned} D_ie_0&=e_0; \\ \big \langle e_0^{(n)}, A_M A_{M - 1} \cdots A_1 e_0^{(n)} \big \rangle&= \prod _{j = 1}^M \frac{s_n^2 (y_n - x_j)}{y_n - s_n^2 x_j};\\ \big \langle e_1^{(n)}, C_i A_M \cdots A_{i + 1} A_{i - 1} \cdots A_1 e_0^{(n)} \big \rangle&= \frac{y_n (1 - s_n^2)}{y_n - s_n^2 x_i} \prod _{j \ne i} \frac{s_n^2 (y_n - x_j)}{y_n - s_n^2 x_j}, \end{aligned}\nonumber \\ \end{aligned}$$

using the definition of the operators (2.8) and formulas for the vertex weights W (2.3). Therefore, (A.30) is continued as

$$\begin{aligned} \begin{aligned}&\big \langle e_0^{(n)} \otimes v_2, C_M C_{M - 1} \cdots C_1 e_0 \big \rangle = \langle v_2, C_M C_{M - 1} \cdots C_1 e_0 \rangle \prod _{j = 1}^M \frac{s_n^2 (y_n - x_j)}{y_n - s_n^2 x_j};\\&\big \langle e_1^{(n)} \otimes v_2, C_M C_{M - 1} \cdots C_1 e_0 \big \rangle = \sum _{i = 1}^M \langle v_2, C_M \cdots C_{i + 1} C_{i - 1} \cdots C_1 e_0 \rangle \\&\quad \times \frac{y_n (1 - s_n^2)}{y_n - s_n^2 x_i} \prod _{j \ne i} \frac{s_n^2 (y_n - x_j)}{y_n - s_n^2 x_j} \prod _{j \ne i} \frac{1}{x_i - x_j} \prod _{j = 1}^{i - 1} (r^{-2}_i x_i - x_j) \prod _{j = i + 1}^M (r^{-2}_j x_j - x_i). \end{aligned}\nonumber \\ \end{aligned}$$
(A.31)

Now we can evaluate \(\langle e_{{\mathcal {T}};N},C_M \ldots C_1 e_0 \rangle \) by repeatedly applying (A.31). Throughout these applications, we use the first or the second identity in (A.31), respectively, for each n not belonging or belonging to the set \(\left\{ t_1+N,t_2+N,\ldots ,t_M+N \right\} \). In the latter case, that is, for \(n=N+t_j\), we choose which index \(i=i_j\in \left\{ 1,\ldots ,M \right\} \) to remove. These choices are encoded by a permutation \(\sigma \in {\mathfrak {S}}_M\) as \(i_j=\sigma (j)\). This leads to the desired claim, where, in particular, \(\mathop {\textrm{sgn}}(\sigma )\) arises from reordering the denominators \(x_{\sigma (i)}-x_{\sigma (j)}\) to \(x_i-x_j\) over all \(1\le i<j\le M\). \(\square \)

1.2.5 Completing the proof

To finalize the proof of Theorem 3.10, let us recall the formula to be established. Fix an arbitrary signature \(\lambda =(\lambda _1\ge \ldots \ge \lambda _N \ge 0)\). Let \(d = d(\lambda ) \ge 0\) denote the integer such that \(\lambda _d \ge d\) and \(\lambda _{d + 1} < d + 1\). Denote by \(\ell _j=\lambda _j+N-j+1\), \(j=1,\ldots ,N \), the elements of the set \({\mathcal {S}}(\lambda )\). Moreover, we define \(\mu = (\mu _1< \mu _2< \ldots < \mu _d) = \{1,\ldots ,N \} {\setminus } \big ( {\mathcal {S}}(\lambda ) \cap \{1,\ldots ,N \} \big )\). Our goal is to show that

$$\begin{aligned}&G_{\lambda } ({\textbf{x}}; {\textbf{y}}; {\textbf{r}}; {\textbf{s}}) = \prod _{j=1}^{M}\prod _{k=1}^{N} \frac{y_k-s_k^2r_j^{-2}x_j}{y_k-s_k^2 x_j}\nonumber \\&\quad \sum _{\begin{array}{c} {\mathcal {I}},{\mathcal {J}}\subseteq \left\{ 1,\ldots ,M \right\} \\ |{\mathcal {I}}|=|{\mathcal {J}}|=d \end{array}} \prod _{\begin{array}{c} i\in {\mathcal {I}}\\ 1\le j\le M \end{array}}(r_i^{-2}x_i-x_j) \prod _{\begin{array}{c} i\in {\mathcal {I}}\\ j\in {\mathcal {I}}^c \end{array}} \frac{1}{r_i^{-2}x_i-r_j^{-2}x_j}\nonumber \\&\quad \times \prod _{\begin{array}{c} i,j\in {\mathcal {I}}\\ i<j \end{array}}\frac{1}{r_i^{-2}x_i-r_j^{-2}x_j} \prod _{\begin{array}{c} i\in {\mathcal {I}}^c\\ j\in {\mathcal {J}} \end{array}}(r_i^{-2}x_i-x_j) \prod _{\begin{array}{c} i\in {\mathcal {J}}^c\\ j\in {\mathcal {J}} \end{array}} \frac{1}{x_i-x_j} \prod _{\begin{array}{c} i,j\in {\mathcal {J}}\\ i<j \end{array}}\frac{1}{x_j-x_i}\nonumber \\&\quad \times \sum _{\sigma ,\rho \in {\mathfrak {S}}_d} \mathop {\textrm{sgn}}(\sigma \rho ) \prod _{h = 1}^d \biggl ( \frac{y_{\ell _h} \big ( 1 - s_{\ell _h}^2 \big )}{y_{\ell _h} - s_{\ell _h}^2 x_{j_{\rho (h)}}} \prod _{i = N + 1}^{\ell _h - 1} \frac{s_i^2 \big (y_i - x_{j_{\rho (h)}} \big )}{y_i - s_i^2 x_{j_{\rho (h)}}} \biggr )\nonumber \\&\quad \times \prod _{m=1}^d \biggl ( \frac{s_{\mu _m}^2 }{y_{\mu _m} - s_{\mu _m}^2 r^{-2}_{i_{\sigma (m)}} x_{i_{\sigma (m)}}} \prod _{k = \mu _m + 1}^N \frac{s_k^2 \big (r^{-2}_{i_{\sigma (m)}} x_{i_{\sigma (m)}} - y_k \big )}{y_k - s_k^2 r^{-2}_{i_{\sigma (m)}} x_{i_{\sigma (m)}}} \biggr ).\nonumber \\ \end{aligned}$$
(A.32)

where \({\mathcal {I}}= (i_1< i_2< \ldots < i_d)\) and \({\mathcal {J}}= (j_1< j_2< \ldots < j_d)\).

Recall that

$$\begin{aligned} G_{\lambda }({\textbf{x}};{\textbf{y}};{\textbf{r}};{\textbf{s}})= \left\langle e_{{\mathcal {S}}_N(\lambda )}\otimes e_{{\mathcal {S}}_{>N}(\lambda )}, D(x_M,r_M)\ldots D(x_2,r_2)D(x_1,r_1)(e_{[1,N]}\otimes e_0) \right\rangle , \end{aligned}$$

where we have split the vectors into \(e_{{\mathcal {S}}_N(\lambda )},e_{[1,N]}\in V^{(1)}\otimes \ldots \otimes V^{(N)}\) (cf. (A.12)), and the remaining two vectors belong to \(V^{(N+1)}\otimes V^{(N+2)}\otimes \ldots \). Note that the vector \(e_{{\mathcal {S}}_{>N}(\lambda )}\) has exactly d tensor components of the form \(e_1^{(k)}\), and the other components are of the form \(e_0^{(k)}\). We can use Proposition A.8 to write:

$$\begin{aligned} \begin{aligned}&\left\langle e_{{\mathcal {S}}_N(\lambda )}\otimes e_{{\mathcal {S}}_{>N}(\lambda )}, D(x_M,r_M)\ldots D(x_2,r_2)D(x_1,r_1) (e_{[1,N]}\otimes e_0)\right\rangle \\&\quad = \prod _{j=1}^{M}\prod _{k=1}^{N} \frac{y_k-s_k^2r_j^{-2}x_j}{y_k-s_k^2 x_j} \sum _{\begin{array}{c} {\mathcal {I}}\subseteq \left\{ 1,\ldots ,M \right\} \\ |{\mathcal {I}}|=d \end{array}}\\&\qquad \left\langle e_{{\mathcal {S}}_{>N}(\lambda )}, \biggl ( \prod _{j\notin {\mathcal {I}}}D(x_j,r_j) \biggr ) C(x_{i_d},r_{i_d}) \ldots C(x_{i_1},r_{i_1}) e_0 \right\rangle \\&\qquad \times \prod _{i \in {\mathcal {I}},\, j\notin {\mathcal {I}}} \frac{r_i^{-2}x_i-x_j}{r_i^{-2}x_i-r_j^{-2}x_j} \prod _{i,j \in {\mathcal {I}},\,i<j} \frac{r_i^{-2}x_i-x_j}{r_i^{-2}x_i-r_j^{-2}x_j}\\&\qquad \times \sum _{\sigma \in {\mathfrak {S}}_d} \mathop {\textrm{sgn}}(\sigma ) \prod _{j=1}^d \biggl ( \frac{s_{\mu _j}^2 x_{i_{\sigma (j)}} \big ( r^{-2}_{i_{\sigma (j)}} - 1\big )}{y_{\mu _j} - s_{\mu _j}^2 r^{-2}_{i_{\sigma (j)}} x_{i_{\sigma (j)}}} \prod _{k = \mu _j + 1}^N \frac{s_k^2 \big (r^{-2}_{i_{\sigma (j)}} x_{i_{\sigma (j)}} - y_k \big )}{y_k - s_k^2 r^{-2}_{i_{\sigma (j)}} x_{i_{\sigma (j)}}} \biggr ). \end{aligned}\nonumber \\ \end{aligned}$$
(A.33)

Let us denote

$$\begin{aligned} D_{{\mathcal {I}}^c}:=\prod _{j\notin {\mathcal {I}}}D(x_j,r_j), \qquad C_{{\mathcal {I}}}:= C(x_{i_d},r_{i_d}) \ldots C(x_{i_1},r_{i_1}), \end{aligned}$$

and use similar notation in what follows. In particular, in all such products of the C operators the indices are decreasing from left to right. Employ Proposition A.9 to write

$$\begin{aligned}&D_{{\mathcal {I}}^c}C_{{\mathcal {I}}}= \sum _{\begin{array}{c} {\mathcal {K}}\subseteq {\mathcal {I}}^c,\, {\mathcal {H}}\subseteq {\mathcal {I}}\\ |{\mathcal {K}}|+|{\mathcal {H}}|=d \end{array}} C_{{\mathcal {K}}}C_{{\mathcal {H}}}D_{{\mathcal {I}}\setminus {\mathcal {H}}}D_{{\mathcal {I}}^c \setminus {\mathcal {K}}} \\&\quad \times \prod _{k\in {\mathcal {K}}}(1-r_k^{-2})x_k \prod _{i\in {\mathcal {K}}\cup {\mathcal {H}},\, j\in {\mathcal {I}}^c\setminus {\mathcal {K}}}\frac{r_j^{-2}x_j-x_i}{x_j-x_i} \prod _{h\in {\mathcal {H}},\, j\in {\mathcal {I}}\setminus {\mathcal {H}}}\frac{1}{x_j-x_h} \prod _{i\in {\mathcal {K}},\, j\in {\mathcal {I}}\setminus {\mathcal {H}}}\frac{1}{x_i-x_j}\\&\quad \times \prod _{i,j\in {\mathcal {K}},\, i<j}(r_i^{-2}x_i-x_j) \prod _{i,h\in {\mathcal {H}},\, h<i}\frac{1}{r_i^{-2}x_i-x_h} \prod _{i,j\in {\mathcal {I}},\, i<j}(r_j^{-2}x_j-x_i). \end{aligned}$$

Let us insert this into (A.33). Observe that all operators D preserve the vector \(e_0\). Thus, we can continue the computation as

$$\begin{aligned}&\left\langle e_{{\mathcal {S}}_N(\lambda )}\otimes e_{{\mathcal {S}}_{>N}(\lambda )}, D(x_M,r_M)\ldots D(x_2,r_2)D(x_1,r_1) (e_{[1,N]}\otimes e_0)\right\rangle \\&\quad = \prod _{j=1}^{M}\prod _{k=1}^{N} \frac{y_k-s_k^2r_j^{-2}x_j}{y_k-s_k^2 x_j} \sum _{\begin{array}{c} {\mathcal {I}}\subseteq \left\{ 1,\ldots ,M \right\} \\ |{\mathcal {I}}|=d \end{array}} \prod _{i,j\in {\mathcal {I}}}(r_i^{-2}x_i-x_j) \\&\qquad \prod _{\begin{array}{c} i \in {\mathcal {I}}\\ j\in {\mathcal {I}}^c \end{array}} \frac{r_i^{-2}x_i-x_j}{r_i^{-2}x_i-r_j^{-2}x_j} \prod _{\begin{array}{c} i,j \in {\mathcal {I}}\\ i<j \end{array}} \frac{1}{r_i^{-2}x_i-r_j^{-2}x_j} \\&\qquad \times \sum _{\begin{array}{c} {\mathcal {K}}\subseteq {\mathcal {I}}^c,\, {\mathcal {H}}\subseteq {\mathcal {I}}\\ |{\mathcal {K}}|+|{\mathcal {H}}|=d \end{array}} \left\langle e_{{\mathcal {S}}_{>N}(\lambda )}, C_{\mathcal {K}}C_{\mathcal {H}} e_0 \right\rangle \prod _{k\in {\mathcal {K}}}(1-r_k^{-2})x_k \prod _{i\in {\mathcal {K}}\cup {\mathcal {H}},\, j\in {\mathcal {I}}^c\setminus {\mathcal {K}}}\frac{r_j^{-2}x_j-x_i}{x_j-x_i}\\&\qquad \times \prod _{h\in {\mathcal {H}},\, j\in {\mathcal {I}}\setminus {\mathcal {H}}}\frac{1}{x_j-x_h} \prod _{i\in {\mathcal {K}},\, j\in {\mathcal {I}}\setminus {\mathcal {H}}}\frac{1}{x_i-x_j} \\&\qquad \prod _{i,j\in {\mathcal {K}},\, i<j}(r_i^{-2}x_i-x_j) \prod _{i,h\in {\mathcal {H}},\, h<i}\frac{1}{r_i^{-2}x_i-x_h}\\&\qquad \times \sum _{\sigma \in {\mathfrak {S}}_d} \mathop {\textrm{sgn}}(\sigma ) \prod _{j=1}^d \biggl ( \frac{s_{\mu _j}^2 }{y_{\mu _j} - s_{\mu _j}^2 r^{-2}_{i_{\sigma (j)}} x_{i_{\sigma (j)}}} \prod _{k = \mu _j + 1}^N \frac{s_k^2 \big (r^{-2}_{i_{\sigma (j)}} x_{i_{\sigma (j)}} - y_k \big )}{y_k - s_k^2 r^{-2}_{i_{\sigma (j)}} x_{i_{\sigma (j)}}} \biggr ). \end{aligned}$$

Now we are going to apply Proposition A.13 to compute the remaining inner product. Recall that \(e_{{\mathcal {S}}_{>N}(\lambda )}\) has exactly d tensor components equal to \(e_1^{(m)}\), for \(m \in \left\{ \ell _1,\ldots ,\ell _d \right\} \). Denote \((x_1',\ldots ,x_d' )=(x_{h_1},\ldots ,x_{h_{|{\mathcal {H}}|}}, x_{k_1},\ldots ,x_{k_{|{\mathcal {K}}|}} )\), where \(h_1<\ldots <h_{|{\mathcal {H}}|} \), \(k_1<\ldots <k_{|{\mathcal {K}}|} \). Then we have

$$\begin{aligned} \begin{aligned}&\left\langle e_{{\mathcal {S}}_{>N}(\lambda )}, C_{\mathcal {K}}C_{\mathcal {H}} e_0 \right\rangle = (-1)^{\frac{d(d-1)}{2}} \prod _{i,j\in {\mathcal {H}},\, i<j} \frac{r_j^{-2} x_j - x_i}{x_i - x_j} \prod _{i,j\in {\mathcal {K}},\, i<j} \frac{r_j^{-2} x_j - x_i}{x_i - x_j}\\&\quad \times \prod _{i\in {\mathcal {H}},\,j\in {\mathcal {K}}} \frac{r_j^{-2} x_j - x_i}{x_i - x_j} \sum _{\rho \in {\mathfrak {S}}_d} \mathop {\textrm{sgn}}(\rho ) \prod _{j = 1}^d \biggl ( \frac{y_{\ell _j} \big ( 1 - s_{\ell _j}^2 \big )}{y_{\ell _j} - s_{\ell _j}^2 x'_{\rho (j)}} \prod _{i = N + 1}^{\ell _j - 1} \frac{s_i^2 \big (y_i - x'_{\rho (j)} \big )}{y_i - s_i^2 x'_{\rho (j)}} \biggr ). \end{aligned}\nonumber \\ \end{aligned}$$
(A.34)

The sign \((-1)^{\frac{d(d-1)}{2}}\) arises from the fact that the \(t_j\)’s in Proposition A.13 are increasing, while the \(\ell _j\)’s in (A.34) are decreasing, so the sign of \(\rho \) has to be multiplied by \((-1)^{\frac{d(d-1)}{2}}\). This allows us to continue our computation as follows:

$$\begin{aligned}&\left\langle e_{{\mathcal {S}}_N(\lambda )}\otimes e_{{\mathcal {S}}_{>N}(\lambda )}, D(x_M,r_M)\ldots D(x_2,r_2)D(x_1,r_1) (e_{[1,N]}\otimes e_0)\right\rangle \\&\quad = (-1)^{\frac{d(d-1)}{2}} \prod _{j=1}^{M}\prod _{k=1}^{N} \frac{y_k-s_k^2r_j^{-2}x_j}{y_k-s_k^2 x_j}\\&\qquad \times \sum _{\begin{array}{c} {\mathcal {I}}\subseteq \left\{ 1,\ldots ,M \right\} \\ |{\mathcal {I}}|=d \end{array}} \prod _{\begin{array}{c} i\in {\mathcal {I}}\\ 1\le j\le M \end{array}}(r_i^{-2}x_i-x_j) \prod _{\begin{array}{c} i\in {\mathcal {I}}\\ j\in {\mathcal {I}}^c \end{array}} \frac{1}{r_i^{-2}x_i-r_j^{-2}x_j} \prod _{\begin{array}{c} i,j\in {\mathcal {I}}\\ i<j \end{array}}\frac{1}{r_i^{-2}x_i-r_j^{-2}x_j}\\&\qquad \times \sum _{\begin{array}{c} {\mathcal {K}}\subseteq {\mathcal {I}}^c,\, {\mathcal {H}}\subseteq {\mathcal {I}}\\ |{\mathcal {K}}|+|{\mathcal {H}}|=d \end{array}} \prod _{\begin{array}{c} i\in {\mathcal {I}}^c\\ j\in {\mathcal {K}}\cup {\mathcal {H}} \end{array}}(r_i^{-2}x_i-x_j) \prod _{\begin{array}{c} i\notin {\mathcal {K}}\cup {\mathcal {H}}\\ j\in {\mathcal {K}}\cup {\mathcal {H}} \end{array}} \frac{1}{x_i-x_j} \prod _{\begin{array}{c} i,j\in {\mathcal {H}}\\ i<j \end{array}}\frac{1}{x_i-x_j}\\&\qquad \prod _{\begin{array}{c} i,j\in {\mathcal {K}}\\ i<j \end{array}}\frac{1}{x_i-x_j} \prod _{\begin{array}{c} i\in {\mathcal {H}}\\ j\in {\mathcal {K}} \end{array}}\frac{1}{x_i-x_j}\\&\qquad \times \sum _{\sigma ,\rho \in {\mathfrak {S}}_d} \mathop {\textrm{sgn}}(\sigma \rho ) \prod _{j = 1}^d \biggl ( \frac{y_{\ell _j} \big ( 1 - s_{\ell _j}^2 \big )}{y_{\ell _j} - s_{\ell _j}^2 x'_{\rho (j)}} \prod _{i = N + 1}^{\ell _j - 1} \frac{s_i^2 \big (y_i - x'_{\rho (j)} \big )}{y_i - s_i^2 x'_{\rho (j)}} \biggr )\\&\qquad \times \prod _{j=1}^d \biggl ( \frac{s_{\mu _j}^2 }{y_{\mu _j} - s_{\mu _j}^2 r^{-2}_{i_{\sigma (j)}} x_{i_{\sigma (j)}}} \prod _{k = \mu _j + 1}^N \frac{s_k^2 \big (r^{-2}_{i_{\sigma (j)}} x_{i_{\sigma (j)}} - y_k \big )}{y_k - s_k^2 r^{-2}_{i_{\sigma (j)}} x_{i_{\sigma (j)}}} \biggr ). \end{aligned}$$

Upon denoting \({\mathcal {J}}={\mathcal {K}}\cup {\mathcal {H}}=(j_1<\ldots <j_d )\), we arrive at the desired statement (A.32). Note that reordering the indices in \((x_1',\ldots ,x_d' )\) in the increasing order leads to an extra ± sign coming from \(\mathop {\textrm{sgn}}(\rho )\), but this sign is compensated by writing

$$\begin{aligned} \prod _{i,j\in {\mathcal {H}},\,i<j}\frac{1}{x_i-x_j} \prod _{i,j\in {\mathcal {K}},\,i<j}\frac{1}{x_i-x_j} \prod _{i\in {\mathcal {H}},\,j\in {\mathcal {K}}}\frac{1}{x_i-x_j} = \pm \prod _{i,j\in {\mathcal {J}},\,i<j}\frac{1}{x_i-x_j}\nonumber \\ \end{aligned}$$
(A.35)

(equivalently, one may refer to the symmetry as in the proof of Lemma A.12). Finally, replacing \(x_i-x_j\) in (A.35) with \(x_j-x_i\) absorbs the sign \((-1)^{\frac{d(d-1)}{2}}\). This completes the proof of Theorem 3.10.

Correlation kernel via Eynard–Mehta approach

Here we prove Theorem 6.7 on the determinantal structure of the FG measures and processes. We employ an Eynard–Mehta type approach based on [20], see also [31].

1.1 Representation of the ascending FG process in a determinantal form

Recall the notation of the ascending FG process (6.8) from Sect. 6.2. Throughout Appendix B we omit the notation \({\textbf{y}},{\textbf{s}}\) in the functions \(G_{\mu /\varkappa }(w_i;{\textbf{y}};\theta _i;{\textbf{s}})\) and other similar quantities.

Here we use the determinantal formulas for the functions \(F_\lambda \) (Theorem 3.9) and \(G_\mu \), \(G_{\nu /\lambda }\) to rewrite the probabilities (6.8) in a determinantal form. The formulas for \(G_\mu \) and \(G_{\nu /\lambda }\) are of Jacobi–Trudi type and follow from Cauchy identities and biorthogonality as in Sect. 5.4.

Recall the notation (3.11):

$$\begin{aligned} \varphi _k(x)= \frac{1}{y_{k+1}-x} \prod _{j=1}^{k} \frac{y_j-s_j^2x}{s_j^2(y_j-x)},\qquad k\ge 0. \end{aligned}$$

By Theorem 3.9, we have

$$\begin{aligned} F_\lambda (\rho )= \textrm{const}\cdot \det \left[ \varphi _{\lambda _j+N-j}(x_i) \right] _{i,j=1}^{N}, \end{aligned}$$
(B.1)

where the constant is independent of \(\lambda \) (we adopt this convention for all such constants throughout Appendix B, and will denote all of them by \(\textrm{const}\)).

Next, recall the functions \(\psi _k\) (5.1):

$$\begin{aligned} \psi _k(x) = \frac{y_{k+1}(s_{k+1}^2-1)}{y_{k+1}-s^2_{k+1}x}\, \prod _{j=1}^{k} \frac{s_j^2(y_j-x)}{y_j-s_j^2x},\qquad k\ge 1. \end{aligned}$$

For \(({\textbf{w}};\varvec{\uptheta })=(w_a,\ldots ,w_b;\theta _a,\ldots ,\theta _b)\), \(a\le b\), let us define a slight generalization of (5.13):

$$\begin{aligned} {\textsf{h}}_{k,p}({\textbf{w}};\varvec{\uptheta }):= \frac{1}{2\pi {\textbf{i}}}\oint _{\Gamma _{y,w}}dz\, \frac{\psi _k(z)}{y_p-z} \prod _{j=a}^{b}\frac{z-\theta _j^{-2}w_j}{z-w_j}, \qquad k\ge 0,\quad p\ge 1, \end{aligned}$$

where the integration contour \(\Gamma _{y,w}\) is positively oriented, surrounds all \(y_i, w_j\), and leaves out all \(s_i^{-2}y_i\). The function \(G_{\lambda ^{(1)}}\) in (6.8) has the following determinantal form (with \(a=b=1\) in \({\textsf{h}}_{k,l}\)):

$$\begin{aligned} G_{\lambda ^{(1)}} (w_1;\theta _1) = \textrm{const}\cdot \det \bigl [ {\textsf{h}}_{\lambda ^{(1)}_i+N-i,\,j}(w_1;\theta _1) \bigr ]_{i,j=1}^{N}. \end{aligned}$$
(B.2)

Finally, recall the functions \(\widetilde{{\textsf{h}}}_l\) (5.9) and \({\textsf{g}}_{l/k}\) (5.10):

$$\begin{aligned} {\textsf{g}}_{l/k}({\textbf{w}};\varvec{\uptheta }) = \widetilde{{\textsf{h}}}_{l-k}({\textbf{w}};\tau _k {\textbf{y}};\varvec{\uptheta };\tau _k {\textbf{s}}) = \frac{{\textbf{1}}_{l\ge k}}{2\pi {\textbf{i}}}\oint _{\Gamma _{y,w}} dz\, \varphi _k(z)\psi _l(z) \prod _{j=a}^{b}\frac{z-\theta _j^{-2}w_j}{z-w_j}, \end{aligned}$$

where the integration contour is around \(y_j,w_i\) and not \(s_j^{-2}y_j\), and \(({\textbf{w}};\varvec{\uptheta })=(w_a,\ldots ,w_b;\theta _a,\ldots ,\theta _b)\) with \(a\le b\). The skew functions in (6.8) take the following determinantal form:

$$\begin{aligned} G_{\lambda ^{(t)}/\lambda ^{(t-1)}}(w_t;\theta _t) = \det \bigl [ {\textsf{g}}_{(\lambda ^{(t)}_i+N-i)/(\lambda ^{(t-1)}_j+N-j)} (w_t;\theta _t) \bigr ]_{i,j=1}^{N}. \end{aligned}$$
(B.3)

We observe that when evaluated at a single pair of variables \((w;\theta )\), both \({\textsf{h}}_{k,j}\) (for \(k\ge j\)) and \({\textsf{g}}_{l/k}\) become explicit:

Lemma B.1

We have

$$\begin{aligned}{} & {} {\textsf{h}}_{k,j}(w;\theta )= {\left\{ \begin{array}{ll} \dfrac{w(1-\theta ^{-2})\psi _k(w)}{y_j-w},&{} \quad k \ge j;\\ \text {a non-product expression},&{} \quad k< j, \end{array}\right. }\\{} & {} {\textsf{g}}_{l/k}(w;\theta )= {\left\{ \begin{array}{ll} w(1-\theta ^{-2})\varphi _k(w)\psi _l(w),&{} \quad l>k;\\ \dfrac{\theta ^{-2}w-s_{k+1}^{-2}y_{k+1}}{w-s_{k+1}^{-2}y_{k+1}},&{} \quad l=k;\\ 0,&{} \quad l<k. \end{array}\right. } \end{aligned}$$

Proof

For \({\textsf{h}}_{k,j}\) with \(k\ge j\), the only pole inside the contour is \(z=w\), which leads to the desired formula. The exact form of the functions \({\textsf{h}}_{k,j}\) with \(k < j\) is not very explicit (apart from the original contour integral expression), but they are not involved in our computations.

For \({\textsf{g}}_{l/k}\), in the case \(l=k\), the only singularity outside the contour is \(z=s_{k+1}^{-2}y_{k+1}\), and for \(l>k\) the only singularity inside the contour is \(z=w\). The respective residues in these two cases lead to the desired formulas. For \(l<k\), there are no singularities outside the integration contour (and the integrand has zero residue at infinity), so the integral vanishes. \(\square \)
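
The residue evaluations of Lemma B.1 can also be confirmed by computing the defining contour integrals numerically. The following Python sketch is purely illustrative (it is not used in the proofs): the parameter values are arbitrary, chosen only so that the circle \(|z|=1.1\) encloses the relevant \(y_j\) and \(w\) while leaving every \(s_j^{-2}y_j\) outside, and \(k,l,p\) are sample indices with \(l>k\ge p\).

import numpy as np

# Compare the contour integrals defining h_{k,p} and g_{l/k} (one pair (w, theta))
# with the residue formulas of Lemma B.1, for one generic choice of parameters.
rng = np.random.default_rng(0)
y = rng.uniform(0.5, 1.0, size=7)     # y_1, ..., y_7 (all inside the contour |z| = 1.1)
s2 = rng.uniform(0.2, 0.4, size=7)    # s_1^2, ..., s_7^2, so that s_j^{-2} y_j >= 1.25
w, th2 = 0.7, 1.8                     # a single pair (w, theta^2)

def phi(k, z):                        # phi_k(z) of (3.11); 0-based arrays, y[k] is y_{k+1}
    out = 1.0 / (y[k] - z)
    for j in range(k):
        out = out * (y[j] - s2[j] * z) / (s2[j] * (y[j] - z))
    return out

def psi(k, z):                        # psi_k(z) of (5.1)
    out = y[k] * (s2[k] - 1.0) / (y[k] - s2[k] * z)
    for j in range(k):
        out = out * s2[j] * (y[j] - z) / (y[j] - s2[j] * z)
    return out

def contour(f, R=1.1, n=4096):        # (2 pi i)^{-1} times the integral of f over |z| = R
    z = R * np.exp(2j * np.pi * np.arange(n) / n)
    return np.mean(f(z) * z)

k, l, p = 3, 5, 2                     # sample indices with l > k >= p
rat = lambda z: (z - w / th2) / (z - w)
print(abs(contour(lambda z: psi(k, z) / (y[p - 1] - z) * rat(z))
          - w * (1 - 1 / th2) * psi(k, w) / (y[p - 1] - w)))        # h_{k,p}, k >= p
print(abs(contour(lambda z: phi(k, z) * psi(l, z) * rat(z))
          - w * (1 - 1 / th2) * phi(k, w) * psi(l, w)))             # g_{l/k}, l > k
print(abs(contour(lambda z: phi(k, z) * psi(k, z) * rat(z))
          - (w / th2 - y[k] / s2[k]) / (w - y[k] / s2[k])))         # g_{k/k}
print(abs(contour(lambda z: phi(l, z) * psi(k, z) * rat(z))))       # g_{k/l}, k < l: integral vanishes
# all four printed numbers should be at rounding-error level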

Putting together (B.1), (B.2), and (B.3), we get:

Proposition B.2

The probability weights under the ascending FG process (6.8) have the following product-of-determinants form. For \(\ell ^{(t)}_j:=\lambda ^{(t)}_{j}+N+1-j\), we have

$$\begin{aligned} \begin{aligned}&\mathscr{A}\mathscr{P}(\lambda ^{(1)},\lambda ^{(2)},\ldots ,\lambda ^{(T)})\\&\quad = \textrm{const}\cdot \det \bigl [ {\textsf{h}}_{\ell ^{(1)}_i-1,\,j}(w_1;\theta _1) \bigr ] \prod _{t=2}^{T} \det \bigl [ {\textsf{g}}_{(\ell ^{(t)}_i-1)/(\ell ^{(t-1)}_j-1)} (w_t;\theta _t) \bigr ] \det \bigl [ \varphi _{\ell ^{(T)}_j-1}(x_i) \bigr ], \end{aligned}\nonumber \\ \end{aligned}$$

where all determinants are taken with respect to \(1\le i,j\le N\), and \(\textrm{const}\) is a normalizing constant which does not depend on the \(\ell ^{(j)}\)’s.

1.2 Application of the Eynard–Mehta theorem

The form of the probability weights as in Proposition B.2 puts the ascending FG process into the domain of applicability of the Eynard–Mehta theorem (see, for example, [20, Theorem 1.4], and also [31]). To express the determinantal correlation kernel of the point process

$$\begin{aligned}{} & {} \{(t,\ell ^{(t)}_j):t=1,\ldots ,T,\,j=1,\ldots ,N \}\subset \left\{ 1,\ldots ,T \right\} \times {\mathbb {Z}}_{\ge 1},\nonumber \\{} & {} \ell ^{(t)}_j=\lambda ^{(t)}_j+N+1-j, \end{aligned}$$
(B.4)

one first needs to invert the \(N\times N\) “Gram matrix” given by

$$\begin{aligned} M_{ij}=\sum _{a_1,\ldots ,a_T\ge 0}{\textsf{h}}_{a_1,i}(w_1;\theta _1) \, {\textsf{g}}_{a_2/a_1}(w_2;\theta _2) \ldots {\textsf{g}}_{a_T/a_{T-1}}(w_T;\theta _T) \, \varphi _{a_T}(x_j). \end{aligned}$$
(B.5)

Note that by Lemma B.1, this series converges absolutely under the condition (6.7).

Proposition B.3

We have

$$\begin{aligned} M_{ij}= \frac{1}{y_i-x_j}\prod _{t=1}^{T}\frac{x_j-\theta _t^{-2}w_t}{x_j-w_t}. \end{aligned}$$
(B.6)

The proof is based on the following lemma:

Lemma B.4

Let \(\bigl | \frac{u-s_j^{-2}y_j}{u-y_j} \frac{v-y_j}{v-s_j^{-2}y_j} \bigr |<1-\delta <1\) for all sufficiently large \(j\ge 1\). Then we have

$$\begin{aligned} \sum _{k=0}^{\infty }\varphi _k(u)\psi _k(v)=\frac{1}{u-v}. \end{aligned}$$

Proof

We have

$$\begin{aligned} \begin{aligned} \sum _{k=0}^{\infty }\varphi _k(u)\psi _k(v)&= \sum _{k=0}^{\infty } \frac{1}{u-y_{k+1}} \frac{y_{k+1}(1-s_{k+1}^{-2})}{v-s_{k+1}^{-2}y_{k+1}} \prod _{j=1}^{k} \frac{u-s_j^{-2}y_j}{u-y_j} \frac{v-y_j}{v-s_j^{-2}y_j}\\&= \sum _{k=0}^{\infty } \frac{1}{u-v}\left( 1- \frac{u-s_{k+1}^{-2}y_{k+1}}{u-y_{k+1}} \frac{v-y_{k+1}}{v-s_{k+1}^{-2}y_{k+1}} \right) \\&\quad \prod _{j=1}^{k} \frac{u-s_j^{-2}y_j}{u-y_j} \frac{v-y_j}{v-s_j^{-2}y_j}, \end{aligned}\nonumber \\ \end{aligned}$$

and the sum telescopes to \(1 / (u-v)\) if it converges (which holds under the condition in the hypothesis). \(\square \)
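
For illustration only, and under the additional simplifying assumption of homogeneous parameters \(y_j\equiv y_0\), \(s_j^2\equiv s_0^2\) (not required by the lemma), the telescoping can be observed numerically with the following Python sketch; the values of \(y_0,s_0^2,u,v\) are arbitrary, chosen so that the ratio in the hypothesis is about \(0.15<1\).

# Partial sums of sum_k phi_k(u) psi_k(v) versus 1/(u - v), homogeneous parameters.
y0, s0sq = 0.8, 0.4
u, v = 3.0, 0.2

def phi_hom(k, z):   # phi_k(z) of (3.11) with constant parameters
    return ((y0 - s0sq * z) / (s0sq * (y0 - z))) ** k / (y0 - z)

def psi_hom(k, z):   # psi_k(z) of (5.1) with constant parameters
    return (s0sq * (y0 - z) / (y0 - s0sq * z)) ** k * y0 * (s0sq - 1.0) / (y0 - s0sq * z)

partial = sum(phi_hom(k, u) * psi_hom(k, v) for k in range(60))
print(partial, 1.0 / (u - v))   # the two numbers agree to machine precision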

Proof of Proposition B.3

We represent \({\textsf{h}}_{a_1,i}\) as an integral over \(z_1\), and each \({\textsf{g}}_{a_t/a_{t-1}}\) as an integral over \(z_t\), \(2\le t\le T\). Initially all the integration variables belong to the same contour \(\Gamma _{y,w}\). However, in order to apply Lemma B.4 under the integrals, we need to have the following conditions on the contours for all sufficiently large \(k\ge 1\):

$$\begin{aligned} \Bigl | \frac{z_{t+1}-s_k^{-2}y_k}{z_{t+1}-y_k} \frac{z_t-y_k}{z_{t}-s_k^{-2}y_k} \Bigr |<1-\delta<1,\qquad \Bigl | \frac{x_j-s_k^{-2}y_k}{x_j-y_k} \frac{z_T-y_k}{z_{T}-s_k^{-2}y_k} \Bigr |<1-\delta <1, \end{aligned}$$

where \(t=1,\ldots ,T-1\), \(j=1,\ldots ,N \). Clearly, under certain restrictions on the parameters, such contours exist. Moreover, we may also choose them to be nested: \(z_1\) around all \(y_k\) and \(w_t\), \(z_{B}\) around \(z_A\) if \(B>A\), and all contours must leave outside all the points \(s_k^{-2}y_k\). On these contours, we have by Lemma B.4:

$$\begin{aligned} M_{ij}= \frac{1}{(2\pi {\textbf{i}})^{T}} \oint \ldots \oint \frac{1}{y_i-z_1} \frac{dz_1\ldots dz_T }{(x_j-z_T)(z_T-z_{T-1})\ldots (z_2-z_1)} \prod _{t=1}^{T}\frac{z_t-\theta _t^{-2}w_t}{z_t-w_t}. \end{aligned}$$

This integral is computed as follows. First, for \(z_T\) there is a single pole \(z_T=x_j\) outside the contour (and the integrand has zero residue at infinity). Taking the residue clears the denominator \(x_j-z_T\) and substitutes \(z_T=x_j\). After that, we repeat the procedure for \(z_{T-1},\ldots ,z_1\), which leads to the desired formula.

Finally, the restrictions on the parameters under which the contours exist are lifted by an analytic continuation, since Lemmas B.1 and B.4 imply that the summation in (B.5) produces an a priori rational function. \(\square \)
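
As an independent illustration (not used in the argument), the nested-contour integral above can be evaluated numerically for \(T=2\) and compared with (B.6). In the following Python sketch all parameter values are arbitrary, subject only to the enclosure conditions described in the proof: \(y_i\) and the \(w_t\) lie inside the inner circle, \(x_j\) lies outside both circles, and the \(z_2\) circle encloses the \(z_1\) circle.

import numpy as np

# T = 2 instance of the nested-contour formula for M_{ij}, by the trapezoid rule.
yi, xj = 0.5, 3.0                                    # one y_i and one x_j
w = np.array([0.3, -0.2])                            # w_1, w_2
th2 = np.array([2.0, 1.5])                           # theta_1^2, theta_2^2

n = 512
z1 = 1.0 * np.exp(2j * np.pi * np.arange(n) / n)     # inner contour, |z| = 1
z2 = 1.5 * np.exp(2j * np.pi * np.arange(n) / n)     # outer contour, |z| = 1.5
Z1, Z2 = np.meshgrid(z1, z2)

F = (1.0 / (yi - Z1)) / ((xj - Z2) * (Z2 - Z1)) \
    * (Z1 - w[0] / th2[0]) / (Z1 - w[0]) \
    * (Z2 - w[1] / th2[1]) / (Z2 - w[1])
integral = np.mean(F * Z1 * Z2)                      # (2 pi i)^{-2} times the double integral
claim = np.prod((xj - w / th2) / (xj - w)) / (yi - xj)
print(abs(integral - claim))                         # should be at rounding-error level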

The matrix \(M=[M_{ij}]_{i,j=1}^{N}\) is readily inverted:

Lemma B.5

We have, for \(i,j=1,\ldots ,N\),

$$\begin{aligned} \begin{aligned} M_{ij}^{-1}&= \frac{1}{x_i-y_j} \frac{\prod _{k=1}^{N}(x_i-y_k)(y_j-x_k)}{\prod _{k\ne i}(x_i-x_k)\prod _{k\ne j}(y_j-y_k)} \prod _{t=1}^{T}\frac{x_i-w_t}{x_i-\theta _t^{-2}w_t}\\&= \frac{1}{(2\pi {\textbf{i}})^2} \oint _{\Gamma _{x_i}}d\xi \oint _{\Gamma _{y_j}}d\eta \, \frac{1}{\xi -\eta } \prod _{k=1}^{N}\frac{(\xi -y_k)(\eta -x_k)}{(\xi -x_k)(\eta -y_k)} \prod _{t=1}^{T}\frac{\xi -w_t}{\xi -\theta _t^{-2}w_t}, \end{aligned}\nonumber \\ \end{aligned}$$
(B.7)

where the contours for \(\xi \) and \(\eta \) are small nonintersecting positively oriented circles around \(x_i\) and \(y_j\), respectively, which do not include any other poles of the integrand.

Proof

The first expression for \(M_{ij}^{-1}\) is obtained using the Cauchy determinant, since all minors (and hence all cofactors) of M are determinants of similar form. The contour integral expression corresponds to taking residues at the simple poles \(\xi =x_i\) and \(\eta =y_j\).
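
A direct numerical check of Lemma B.5, purely illustrative and with arbitrarily chosen generic parameters, consists of building \(M\) from (B.6) and multiplying it by the matrix given by the first line of (B.7):

import numpy as np

# Build M from (B.6) and the candidate inverse from (B.7); their product should
# be the identity matrix up to rounding errors.  All parameter values are arbitrary.
rng = np.random.default_rng(1)
N, T = 4, 3
x = rng.uniform(-1.0, 1.0, size=N)
yv = rng.uniform(2.0, 3.0, size=N)
w = rng.uniform(5.0, 6.0, size=T)
th2 = rng.uniform(1.5, 2.5, size=T)                   # theta_t^2

def pref(xi):                                         # prod_t (xi - theta_t^{-2} w_t)/(xi - w_t)
    return np.prod((xi - w / th2) / (xi - w))

M = np.array([[pref(x[j]) / (yv[i] - x[j]) for j in range(N)] for i in range(N)])
Minv = np.array([[np.prod(x[i] - yv) * np.prod(yv[j] - x)
                  / (x[i] - yv[j])
                  / np.prod(np.delete(x[i] - x, i))
                  / np.prod(np.delete(yv[j] - yv, j))
                  / pref(x[i])
                  for j in range(N)] for i in range(N)])
print(np.max(np.abs(M @ Minv - np.eye(N))))           # identity up to rounding errors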

By the Eynard–Mehta theorem as in [20, Theorem 1.4], the correlation kernel of the determinantal point process (B.4) on \(\left\{ 1,\ldots ,T \right\} \times {\mathbb {Z}}_{\ge 1}\) takes the form (the shifts \(a+1,a'+1\) correspond to the shifts in the determinantal representation in Proposition B.2):

$$\begin{aligned}&K_{\mathscr{A}\mathscr{P}}(t,a+1;t',a'+1)\nonumber \\&\quad = -{\textbf{1}}_{t>t'}\sum _{\alpha _{t'+1},\ldots ,\alpha _{t-1}\ge 0 } {\textsf{g}}_{\alpha _{t'+1}/a'}(w_{t'+1};\theta _{t'+1}) \ldots {\textsf{g}}_{\alpha _{t-1}/\alpha _{t-2}}(w_{t-1};\theta _{t-1}) {\textsf{g}}_{a/\alpha _{t-1}}(w_t;\theta _t) \nonumber \\&\qquad + \sum _{i,j=1}^{N} M_{ji}^{-1} \sum _{\alpha _1,\ldots ,\alpha _{t-1}\ge 0 } {\textsf{h}}_{\alpha _1,i}(w_1;\theta _1) {\textsf{g}}_{\alpha _2/\alpha _1}(w_2;\theta _2) \ldots {\textsf{g}}_{a/\alpha _{t-1}}(w_t;\theta _t) \nonumber \\&\qquad \times \sum _{\beta _{t'+1},\ldots ,\beta _T\ge 0 } {\textsf{g}}_{\beta _{t'+1}/a'}(w_{t'+1};\theta _{t'+1}) \ldots {\textsf{g}}_{\beta _T/\beta _{T-1}}(w_T;\theta _T) \varphi _{\beta _T}(x_j). \end{aligned}$$
(B.8)

The iterated sums over the \(\alpha _j\)’s in the first and the second terms are finite and thus converge, and the sum over the \(\beta _j\)’s is infinite but converges under (6.7), see Lemma B.1.

1.3 Computation of the kernel

Let us now compute all the sums in (B.8), and arrive at the resulting formula for the correlation kernel.

For the first summand arising when \(t>t'\), we pass to the nested contours ( \(z_{B}\) around \(z_A\) if \(B>A\)) as in the proof of Proposition B.3. We obtain

$$\begin{aligned}&\sum _{\alpha _{t'+1},\ldots ,\alpha _{t-1}\ge 0 } {\textsf{g}}_{\alpha _{t'+1}/a'}(w_{t'+1};\theta _{t'+1}) \ldots {\textsf{g}}_{\alpha _{t-1}/\alpha _{t-2}}(w_{t-1};\theta _{t-1}) {\textsf{g}}_{a/\alpha _{t-1}}(w_t;\theta _t)\nonumber \\&\quad = \frac{1}{(2\pi {\textbf{i}})^{t-t'}} \oint \ldots \oint \varphi _{a'}(z_{t'+1})\psi _a(z_t)\, \frac{dz_{t'+1}\ldots dz_t}{(z_{t'+2}-z_{t'+1})\ldots (z_{t-1}-z_{t-2})(z_t-z_{t-1}) }\nonumber \\&\qquad \prod _{i=t'+1}^t \frac{z_i-\theta _i^{-2}w_i}{z_i-w_i}, \end{aligned}$$
(B.9)

where we extended the sum over the \(\alpha _j\)’s to all \(\alpha _j\ge 0\) under the integral, and the infinite sums under the integral are computed using Lemma B.4. Next, deforming the contours \(z_{t-1},z_{t-2},\ldots ,z_{t'+1} \) (in this order) to infinity, each integration in \(z_i\) picks up a residue at a single pole outside the integration contour at \(z_i=z_t\). This leaves a single integral:

$$\begin{aligned} (B.9)= \frac{1}{2\pi {\textbf{i}}} \oint _{\Gamma _{y,w}} dz \, \varphi _{a'}(z)\psi _a(z)\, \prod _{i=t'+1}^t \frac{z-\theta _i^{-2}w_i}{z-w_i}. \end{aligned}$$
(B.10)

Arguing in a similar manner, we can compute

$$\begin{aligned}&\sum _{\alpha _1,\ldots ,\alpha _{t-1}\ge 0 } {\textsf{h}}_{\alpha _1,i}(w_1;\theta _1) {\textsf{g}}_{\alpha _2/\alpha _1}(w_2;\theta _2) \ldots {\textsf{g}}_{a/\alpha _{t-1}}(w_t;\theta _t)\\&\quad = \frac{1}{(2\pi {\textbf{i}})^{t}} \oint \ldots \oint \frac{\psi _a(z_t)}{y_i-z_1} \frac{dz_1\ldots dz_t }{(z_2-z_1)(z_3-z_2)\ldots (z_t-z_{t-1}) } \prod _{d=1}^t \frac{z_d-\theta _d^{-2}w_d}{z_d-w_d}\\&\quad = \frac{1}{2\pi {\textbf{i}}} \oint _{\Gamma _{y,w}} \frac{\psi _a(z)\,dz}{y_i-z} \prod _{d=1}^t \frac{z-\theta _d^{-2}w_d}{z-w_d}, \end{aligned}$$

and

$$\begin{aligned}&\sum _{\beta _{t'+1},\ldots ,\beta _T\ge 0 } {\textsf{g}}_{\beta _{t'+1}/a'}(w_{t'+1};\theta _{t'+1}) \ldots {\textsf{g}}_{\beta _T/\beta _{T-1}}(w_T;\theta _T) \varphi _{\beta _T}(x_j)\\&\quad = \frac{1}{(2\pi {\textbf{i}})^{T-t'}} \oint \ldots \oint \frac{\varphi _{a'}(z_{t'+1})}{x_j-z_T} \frac{dz_{t'+1}\ldots dz_T}{(z_{t'+2}-z_{t'+1})\ldots (z_{T-1}-z_{T-2})(z_T-z_{T-1}) }\\&\qquad \prod _{c=t'+1}^T \frac{z_c-\theta _c^{-2}w_c}{z_c-w_c}\\&\quad = \varphi _{a'}(x_j) \prod _{c=t'+1}^T \frac{x_j-\theta _c^{-2}w_c}{x_j-w_c}. \end{aligned}$$

In the latter computation we pick the residues at \(z_T=x_j,\ldots ,z_{t'+1}=x_j\) (in this order); at each step this is the only pole outside the corresponding integration contour. Finally, we take the last two quantities, multiply by \(M_{ji}^{-1}\), and sum as in (B.8). Using (B.7), we have

$$\begin{aligned}&\sum _{i,j=1}^N \frac{1}{(2\pi {\textbf{i}})^2} \oint _{\Gamma _{x_j}}d\xi \oint _{\Gamma _{y_i}} d \eta \, \frac{1}{\xi -\eta } \prod _{k=1}^{N}\frac{(\xi -y_k)(\eta -x_k)}{(\xi -x_k)(\eta -y_k)} \prod _{t=1}^{T}\frac{\xi -w_t}{\xi -\theta _t^{-2}w_t}\\&\qquad \times \frac{1}{2\pi {\textbf{i}}} \oint _{\Gamma _{y,w}} \frac{\psi _a(z)\,dz}{y_i-z} \prod _{d=1}^t \frac{z-\theta _d^{-2}w_d}{z-w_d} \,\varphi _{a'}(x_j) \prod _{c=t'+1}^T \frac{x_j-\theta _c^{-2}w_c}{x_j-w_c}\\&\quad = \frac{1}{(2\pi {\textbf{i}})^3} \oint _{\Gamma _{x}}d\xi \oint _{\Gamma _{y}} d \eta \oint _{\Gamma _{y,w}} dz \, \frac{1}{\eta -\xi } \frac{1}{\eta -z} \prod _{k=1}^{N}\frac{(\xi -y_k)(\eta -x_k)}{(\xi -x_k)(\eta -y_k)}\\&\qquad \times \frac{y_{a+1}(1-s_{a+1}^{-2})}{z-s_{a+1}^{-2}y_{a+1}} \frac{1}{y_{a'+1}-\xi } \prod _{j=1}^{a} \frac{z-y_j}{z-s_j^{-2}y_j} \prod _{j=1}^{a'} \frac{\xi -s_j^{-2}y_j}{\xi -y_j} \\&\qquad \prod _{d=1}^t \frac{z-\theta _d^{-2}w_d}{z-w_d} \prod _{c=1}^{t'}\frac{\xi -w_c}{\xi -\theta _c^{-2}w_c}. \end{aligned}$$

To obtain the latter expression we substituted \(x_j=\xi \), \(y_i=\eta \), and changed the contours \(\Gamma _x,\Gamma _y\) for these variables to encircle all \(x_k\)’s or all \(y_k\)’s, respectively, while leaving all other poles outside. Observe now that the only pole in \(\eta \) outside the integration contour which produces a nonzero residue is at \(\eta =z\). Indeed, taking the residue at \(\eta =\xi \) leaves an integrand with no poles inside the \(\xi \) contour, so that contribution vanishes. Therefore, we may continue the above computation as follows:

$$\begin{aligned}&= \frac{1}{(2\pi {\textbf{i}})^2} \oint _{\Gamma _{x}}d\xi \oint _{\Gamma _{y,w}} dz \, \frac{1}{z-\xi } \prod _{k=1}^{N}\frac{(\xi -y_k)(z-x_k)}{(\xi -x_k)(z-y_k)}\\&\quad \times \frac{y_{a+1}(1-s_{a+1}^{-2})}{z-s_{a+1}^{-2}y_{a+1}} \frac{1}{\xi -y_{a'+1}} \prod _{j=1}^{a} \frac{z-y_j}{z-s_j^{-2}y_j} \prod _{j=1}^{a'} \frac{\xi -s_j^{-2}y_j}{\xi -y_j}\\&\qquad \prod _{d=1}^t \frac{z-\theta _d^{-2}w_d}{z-w_d} \prod _{c=1}^{t'}\frac{\xi -w_c}{\xi -\theta _c^{-2}w_c}. \end{aligned}$$

Let us drag the \(\xi \) contour through infinity, so that now it encircles the z contour \(\Gamma _{y,w}\), and also all the points \(\theta _i^{-2}w_i\). This leads to an extra minus sign.

Finally, we need to add the additional summand (B.10) if \(t>t'\). In this case, observe that dragging the z contour so that it is outside of the \(\xi \) contour produces the same expression as (B.10), but with the opposite sign. Moreover, we need to undo the shifts \(a+1,a'+1\) corresponding to the determinantal representation in Proposition B.2. Renaming the integration variables as \(\xi =u\), \(z=v\) leads to the final expression for the correlation kernel of the ascending FG process:

$$\begin{aligned} \begin{aligned} K_{\mathscr{A}\mathscr{P}}(t,a;t',a') =&\frac{1}{(2\pi {\textbf{i}})^2} \oint _{\Gamma _{y,w,\theta ^{-2}w}}du \oint _{\Gamma _{y,w}} dv \, \frac{1}{u-v} \prod _{k=1}^{N}\frac{(u-y_k)(v-x_k)}{(u-x_k)(v-y_k)}\\&\times \frac{y_{a}(1-s_{a}^{-2})}{v-s_{a}^{-2}y_{a}} \frac{1}{u-y_{a'}} \prod _{j=1}^{a-1} \frac{v-y_j}{v-s_j^{-2}y_j} \prod _{j=1}^{a'-1} \frac{u-s_j^{-2}y_j}{u-y_j} \\&\prod _{d=1}^t \frac{v-\theta _d^{-2}w_d}{v-w_d} \prod _{c=1}^{t'}\frac{u-w_c}{u-\theta _c^{-2}w_c}. \end{aligned}\nonumber \\ \end{aligned}$$

where the u contour is outside for \(t\le t'\), and the v contour is outside for \(t>t'\). This completes the proof of Theorem 6.7 in the ascending FG process case.

